00:00:00.001 Started by upstream project "autotest-per-patch" build number 126126 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.095 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.096 The recommended git tool is: git 00:00:00.096 using credential 00000000-0000-0000-0000-000000000002 00:00:00.098 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.142 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.207 Using shallow fetch with depth 1 00:00:00.207 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.207 > git --version # timeout=10 00:00:00.260 > git --version # 'git version 2.39.2' 00:00:00.260 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.153 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.164 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.176 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:05.176 > git config core.sparsecheckout # timeout=10 00:00:05.187 > git read-tree -mu HEAD # timeout=10 00:00:05.204 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.225 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.225 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:05.318 [Pipeline] Start of Pipeline 00:00:05.331 [Pipeline] library 00:00:05.332 Loading library shm_lib@master 00:00:05.332 Library shm_lib@master is cached. Copying from home. 00:00:05.349 [Pipeline] node 00:00:05.357 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.360 [Pipeline] { 00:00:05.373 [Pipeline] catchError 00:00:05.374 [Pipeline] { 00:00:05.387 [Pipeline] wrap 00:00:05.394 [Pipeline] { 00:00:05.400 [Pipeline] stage 00:00:05.401 [Pipeline] { (Prologue) 00:00:05.571 [Pipeline] sh 00:00:05.855 + logger -p user.info -t JENKINS-CI 00:00:05.877 [Pipeline] echo 00:00:05.879 Node: GP6 00:00:05.885 [Pipeline] sh 00:00:06.182 [Pipeline] setCustomBuildProperty 00:00:06.191 [Pipeline] echo 00:00:06.193 Cleanup processes 00:00:06.197 [Pipeline] sh 00:00:06.476 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.476 4012005 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.492 [Pipeline] sh 00:00:06.787 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.787 ++ grep -v 'sudo pgrep' 00:00:06.787 ++ awk '{print $1}' 00:00:06.787 + sudo kill -9 00:00:06.787 + true 00:00:06.800 [Pipeline] cleanWs 00:00:06.808 [WS-CLEANUP] Deleting project workspace... 00:00:06.808 [WS-CLEANUP] Deferred wipeout is used... 
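The prologue above first kills any processes left over from a previous run that still reference the workspace (the pgrep / kill -9 sequence) and then wipes the workspace directory. In this run the filtered pgrep output was empty, so kill -9 received no PIDs and its non-zero exit was swallowed by the trailing true. A minimal standalone sketch of that cleanup pattern, assuming the same workspace path as above, looks like:

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # list anything still running against the old checkout, excluding the pgrep command itself
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 with an empty PID list exits non-zero; '|| true' keeps the stage from failing
  sudo kill -9 $pids || true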
00:00:06.815 [WS-CLEANUP] done 00:00:06.818 [Pipeline] setCustomBuildProperty 00:00:06.829 [Pipeline] sh 00:00:07.105 + sudo git config --global --replace-all safe.directory '*' 00:00:07.176 [Pipeline] httpRequest 00:00:07.196 [Pipeline] echo 00:00:07.197 Sorcerer 10.211.164.101 is alive 00:00:07.205 [Pipeline] httpRequest 00:00:07.210 HttpMethod: GET 00:00:07.210 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.211 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.231 Response Code: HTTP/1.1 200 OK 00:00:07.231 Success: Status code 200 is in the accepted range: 200,404 00:00:07.232 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:28.827 [Pipeline] sh 00:00:29.106 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:29.119 [Pipeline] httpRequest 00:00:29.152 [Pipeline] echo 00:00:29.154 Sorcerer 10.211.164.101 is alive 00:00:29.160 [Pipeline] httpRequest 00:00:29.165 HttpMethod: GET 00:00:29.165 URL: http://10.211.164.101/packages/spdk_26acb15a675016f10031bd7ea7149f6d35a9ffea.tar.gz 00:00:29.166 Sending request to url: http://10.211.164.101/packages/spdk_26acb15a675016f10031bd7ea7149f6d35a9ffea.tar.gz 00:00:29.175 Response Code: HTTP/1.1 200 OK 00:00:29.175 Success: Status code 200 is in the accepted range: 200,404 00:00:29.176 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_26acb15a675016f10031bd7ea7149f6d35a9ffea.tar.gz 00:02:23.426 [Pipeline] sh 00:02:23.725 + tar --no-same-owner -xf spdk_26acb15a675016f10031bd7ea7149f6d35a9ffea.tar.gz 00:02:27.031 [Pipeline] sh 00:02:27.315 + git -C spdk log --oneline -n5 00:02:27.315 26acb15a6 nvme/pcie: allocate cq from device-local numa node's memory 00:02:27.315 be7837808 bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:02:27.315 cf710e481 nvme: populate socket_id for rdma controllers 00:02:27.315 f1ebf4106 nvme: populate socket_id for tcp controllers 00:02:27.315 41c6d27b6 nvme: populate socket_id for pcie controllers 00:02:27.328 [Pipeline] } 00:02:27.346 [Pipeline] // stage 00:02:27.357 [Pipeline] stage 00:02:27.359 [Pipeline] { (Prepare) 00:02:27.379 [Pipeline] writeFile 00:02:27.396 [Pipeline] sh 00:02:27.696 + logger -p user.info -t JENKINS-CI 00:02:27.709 [Pipeline] sh 00:02:27.994 + logger -p user.info -t JENKINS-CI 00:02:28.008 [Pipeline] sh 00:02:28.292 + cat autorun-spdk.conf 00:02:28.292 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.292 SPDK_TEST_NVMF=1 00:02:28.292 SPDK_TEST_NVME_CLI=1 00:02:28.292 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:28.292 SPDK_TEST_NVMF_NICS=e810 00:02:28.292 SPDK_TEST_VFIOUSER=1 00:02:28.292 SPDK_RUN_UBSAN=1 00:02:28.292 NET_TYPE=phy 00:02:28.300 RUN_NIGHTLY=0 00:02:28.306 [Pipeline] readFile 00:02:28.332 [Pipeline] withEnv 00:02:28.335 [Pipeline] { 00:02:28.350 [Pipeline] sh 00:02:28.636 + set -ex 00:02:28.636 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:28.636 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:28.636 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:28.636 ++ SPDK_TEST_NVMF=1 00:02:28.636 ++ SPDK_TEST_NVME_CLI=1 00:02:28.636 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:28.636 ++ SPDK_TEST_NVMF_NICS=e810 00:02:28.636 ++ SPDK_TEST_VFIOUSER=1 00:02:28.636 ++ SPDK_RUN_UBSAN=1 00:02:28.636 ++ NET_TYPE=phy 00:02:28.636 ++ RUN_NIGHTLY=0 00:02:28.636 + case $SPDK_TEST_NVMF_NICS in 00:02:28.636 + DRIVERS=ice 
00:02:28.636 + [[ tcp == \r\d\m\a ]] 00:02:28.636 + [[ -n ice ]] 00:02:28.636 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:28.636 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:28.636 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:28.636 rmmod: ERROR: Module irdma is not currently loaded 00:02:28.636 rmmod: ERROR: Module i40iw is not currently loaded 00:02:28.636 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:28.636 + true 00:02:28.636 + for D in $DRIVERS 00:02:28.636 + sudo modprobe ice 00:02:28.636 + exit 0 00:02:28.646 [Pipeline] } 00:02:28.667 [Pipeline] // withEnv 00:02:28.673 [Pipeline] } 00:02:28.690 [Pipeline] // stage 00:02:28.701 [Pipeline] catchError 00:02:28.703 [Pipeline] { 00:02:28.719 [Pipeline] timeout 00:02:28.719 Timeout set to expire in 50 min 00:02:28.721 [Pipeline] { 00:02:28.738 [Pipeline] stage 00:02:28.741 [Pipeline] { (Tests) 00:02:28.758 [Pipeline] sh 00:02:29.068 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.068 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.068 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.068 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:29.068 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.068 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:29.069 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:29.069 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:29.069 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:29.069 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:29.069 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:29.069 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.069 + source /etc/os-release 00:02:29.069 ++ NAME='Fedora Linux' 00:02:29.069 ++ VERSION='38 (Cloud Edition)' 00:02:29.069 ++ ID=fedora 00:02:29.069 ++ VERSION_ID=38 00:02:29.069 ++ VERSION_CODENAME= 00:02:29.069 ++ PLATFORM_ID=platform:f38 00:02:29.069 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:29.069 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:29.069 ++ LOGO=fedora-logo-icon 00:02:29.069 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:29.069 ++ HOME_URL=https://fedoraproject.org/ 00:02:29.069 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:29.069 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:29.069 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:29.069 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:29.069 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:29.069 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:29.069 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:29.069 ++ SUPPORT_END=2024-05-14 00:02:29.069 ++ VARIANT='Cloud Edition' 00:02:29.069 ++ VARIANT_ID=cloud 00:02:29.069 + uname -a 00:02:29.069 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:29.069 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:30.008 Hugepages 00:02:30.008 node hugesize free / total 00:02:30.008 node0 1048576kB 0 / 0 00:02:30.008 node0 2048kB 0 / 0 00:02:30.008 node1 1048576kB 0 / 0 00:02:30.008 node1 2048kB 0 / 0 00:02:30.008 00:02:30.008 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:30.008 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:30.008 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:30.008 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma 
- - 00:02:30.009 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:30.009 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:30.009 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:30.009 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:30.009 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:30.009 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:30.009 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:30.009 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:30.009 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:30.009 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:30.009 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:30.009 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:30.009 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:30.009 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:30.009 + rm -f /tmp/spdk-ld-path 00:02:30.009 + source autorun-spdk.conf 00:02:30.009 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.009 ++ SPDK_TEST_NVMF=1 00:02:30.009 ++ SPDK_TEST_NVME_CLI=1 00:02:30.009 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:30.009 ++ SPDK_TEST_NVMF_NICS=e810 00:02:30.009 ++ SPDK_TEST_VFIOUSER=1 00:02:30.009 ++ SPDK_RUN_UBSAN=1 00:02:30.009 ++ NET_TYPE=phy 00:02:30.009 ++ RUN_NIGHTLY=0 00:02:30.009 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:30.009 + [[ -n '' ]] 00:02:30.009 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:30.268 + for M in /var/spdk/build-*-manifest.txt 00:02:30.268 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:30.268 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:30.268 + for M in /var/spdk/build-*-manifest.txt 00:02:30.268 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:30.268 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:30.268 ++ uname 00:02:30.268 + [[ Linux == \L\i\n\u\x ]] 00:02:30.268 + sudo dmesg -T 00:02:30.268 + sudo dmesg --clear 00:02:30.268 + dmesg_pid=4012683 00:02:30.268 + [[ Fedora Linux == FreeBSD ]] 00:02:30.268 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:30.268 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:30.268 + sudo dmesg -Tw 00:02:30.268 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:30.268 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:30.268 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:30.268 + [[ -x /usr/src/fio-static/fio ]] 00:02:30.268 + export FIO_BIN=/usr/src/fio-static/fio 00:02:30.268 + FIO_BIN=/usr/src/fio-static/fio 00:02:30.268 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:30.268 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:30.268 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:30.268 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:30.268 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:30.268 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:30.268 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:30.268 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:30.268 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:30.268 Test configuration: 00:02:30.268 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:30.268 SPDK_TEST_NVMF=1 00:02:30.268 SPDK_TEST_NVME_CLI=1 00:02:30.268 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:30.268 SPDK_TEST_NVMF_NICS=e810 00:02:30.268 SPDK_TEST_VFIOUSER=1 00:02:30.268 SPDK_RUN_UBSAN=1 00:02:30.268 NET_TYPE=phy 00:02:30.268 RUN_NIGHTLY=0 15:38:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:30.268 15:38:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:30.268 15:38:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:30.268 15:38:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:30.268 15:38:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.268 15:38:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.268 15:38:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.268 15:38:59 -- paths/export.sh@5 -- $ export PATH 00:02:30.268 15:38:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:30.268 15:38:59 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:30.268 15:38:59 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:30.268 15:38:59 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720791539.XXXXXX 00:02:30.268 15:38:59 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720791539.IsDlzB 00:02:30.268 15:38:59 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:30.268 15:38:59 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:30.268 15:38:59 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:30.268 15:38:59 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:30.268 15:38:59 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:30.268 15:38:59 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:30.268 15:38:59 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:30.268 15:38:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.268 15:38:59 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:30.268 15:38:59 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:30.268 15:38:59 -- pm/common@17 -- $ local monitor 00:02:30.268 15:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.268 15:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.268 15:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.269 15:38:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:30.269 15:38:59 -- pm/common@21 -- $ date +%s 00:02:30.269 15:38:59 -- pm/common@21 -- $ date +%s 00:02:30.269 15:38:59 -- pm/common@25 -- $ sleep 1 00:02:30.269 15:38:59 -- pm/common@21 -- $ date +%s 00:02:30.269 15:38:59 -- pm/common@21 -- $ date +%s 00:02:30.269 15:38:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791539 00:02:30.269 15:38:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791539 00:02:30.269 15:38:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791539 00:02:30.269 15:38:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791539 00:02:30.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791539_collect-vmstat.pm.log 00:02:30.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791539_collect-cpu-load.pm.log 00:02:30.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791539_collect-cpu-temp.pm.log 00:02:30.269 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791539_collect-bmc-pm.bmc.pm.log 00:02:31.204 15:39:00 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:31.204 15:39:00 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:31.204 15:39:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:31.204 15:39:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.204 15:39:00 -- spdk/autobuild.sh@16 -- $ date -u 00:02:31.204 Fri Jul 12 01:39:00 PM UTC 2024 00:02:31.204 15:39:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:31.204 v24.09-pre-227-g26acb15a6 00:02:31.204 15:39:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:31.204 15:39:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:31.204 15:39:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:31.204 15:39:00 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:31.204 15:39:00 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:31.204 15:39:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.204 ************************************ 00:02:31.204 START TEST ubsan 00:02:31.204 ************************************ 00:02:31.204 15:39:00 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:31.204 using ubsan 00:02:31.204 00:02:31.204 real 0m0.000s 00:02:31.204 user 0m0.000s 00:02:31.204 sys 0m0.000s 00:02:31.204 15:39:00 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:31.204 15:39:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:31.204 ************************************ 00:02:31.204 END TEST ubsan 00:02:31.204 ************************************ 00:02:31.462 15:39:00 -- common/autotest_common.sh@1142 -- $ return 0 00:02:31.462 15:39:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:31.462 15:39:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:31.462 15:39:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:31.462 15:39:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:31.462 15:39:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:31.462 15:39:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:31.462 15:39:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:31.462 15:39:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:31.462 15:39:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:31.462 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:31.462 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:31.720 Using 'verbs' RDMA provider 00:02:42.260 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:52.234 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:52.234 Creating mk/config.mk...done. 00:02:52.234 Creating mk/cc.flags.mk...done. 00:02:52.234 Type 'make' to build. 00:02:52.234 15:39:21 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:52.234 15:39:21 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:52.234 15:39:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:52.234 15:39:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.234 ************************************ 00:02:52.234 START TEST make 00:02:52.234 ************************************ 00:02:52.234 15:39:21 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:52.492 make[1]: Nothing to be done for 'all'. 
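Outside the Jenkins wrapper, the configure and build steps recorded above can be reproduced roughly as follows; the flags are the config_params captured by get_config_params plus the --with-shared appended for this run, and -j48 matches the run_test make invocation (a sketch under those assumptions, not the autobuild script itself):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
  # parallel build, as started by run_test make above
  make -j48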
00:02:53.940 The Meson build system 00:02:53.940 Version: 1.3.1 00:02:53.940 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:53.940 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:53.940 Build type: native build 00:02:53.940 Project name: libvfio-user 00:02:53.940 Project version: 0.0.1 00:02:53.940 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:53.940 C linker for the host machine: cc ld.bfd 2.39-16 00:02:53.940 Host machine cpu family: x86_64 00:02:53.940 Host machine cpu: x86_64 00:02:53.940 Run-time dependency threads found: YES 00:02:53.940 Library dl found: YES 00:02:53.940 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:53.940 Run-time dependency json-c found: YES 0.17 00:02:53.940 Run-time dependency cmocka found: YES 1.1.7 00:02:53.940 Program pytest-3 found: NO 00:02:53.940 Program flake8 found: NO 00:02:53.940 Program misspell-fixer found: NO 00:02:53.940 Program restructuredtext-lint found: NO 00:02:53.940 Program valgrind found: YES (/usr/bin/valgrind) 00:02:53.940 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.940 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.940 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.940 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:53.940 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:53.940 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:53.940 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:53.940 Build targets in project: 8 00:02:53.940 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:53.940 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:53.940 00:02:53.940 libvfio-user 0.0.1 00:02:53.940 00:02:53.940 User defined options 00:02:53.940 buildtype : debug 00:02:53.940 default_library: shared 00:02:53.940 libdir : /usr/local/lib 00:02:53.940 00:02:53.940 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:54.896 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:54.896 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:54.896 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:54.896 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:54.896 [4/37] Compiling C object samples/null.p/null.c.o 00:02:54.896 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:54.896 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:54.896 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:54.896 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:54.896 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:54.896 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:54.896 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:54.896 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:54.896 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:54.896 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:54.896 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:55.160 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:55.160 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:55.160 [18/37] Compiling C object samples/server.p/server.c.o 00:02:55.160 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:55.160 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:55.160 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:55.160 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:55.160 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:55.160 [24/37] Compiling C object samples/client.p/client.c.o 00:02:55.160 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:55.160 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:55.160 [27/37] Linking target samples/client 00:02:55.160 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:55.160 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:55.160 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:55.421 [31/37] Linking target test/unit_tests 00:02:55.421 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:55.421 [33/37] Linking target samples/server 00:02:55.421 [34/37] Linking target samples/null 00:02:55.421 [35/37] Linking target samples/gpio-pci-idio-16 00:02:55.421 [36/37] Linking target samples/lspci 00:02:55.421 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:55.684 INFO: autodetecting backend as ninja 00:02:55.684 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
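libvfio-user is a stock Meson project, so the "User defined options" block above (buildtype debug, default_library shared, libdir /usr/local/lib) corresponds roughly to the setup/compile/install sequence below; the DESTDIR staging directory matches the install command that follows in the log. This is a sketch assuming the same source and build directories, not necessarily the exact invocation used by SPDK's build scripts:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
  meson setup ../build/libvfio-user/build-debug --buildtype=debug \
      --default-library=shared --libdir=/usr/local/lib
  ninja -C ../build/libvfio-user/build-debug
  # stage the install under spdk/build/libvfio-user instead of the real /usr/local/lib
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C ../build/libvfio-user/build-debug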
00:02:55.684 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:56.260 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:56.260 ninja: no work to do. 00:03:01.534 The Meson build system 00:03:01.534 Version: 1.3.1 00:03:01.534 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:03:01.534 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:03:01.534 Build type: native build 00:03:01.534 Program cat found: YES (/usr/bin/cat) 00:03:01.534 Project name: DPDK 00:03:01.534 Project version: 24.03.0 00:03:01.534 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:01.534 C linker for the host machine: cc ld.bfd 2.39-16 00:03:01.534 Host machine cpu family: x86_64 00:03:01.534 Host machine cpu: x86_64 00:03:01.534 Message: ## Building in Developer Mode ## 00:03:01.534 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:01.534 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:03:01.534 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:01.534 Program python3 found: YES (/usr/bin/python3) 00:03:01.534 Program cat found: YES (/usr/bin/cat) 00:03:01.534 Compiler for C supports arguments -march=native: YES 00:03:01.534 Checking for size of "void *" : 8 00:03:01.534 Checking for size of "void *" : 8 (cached) 00:03:01.534 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:03:01.534 Library m found: YES 00:03:01.534 Library numa found: YES 00:03:01.534 Has header "numaif.h" : YES 00:03:01.534 Library fdt found: NO 00:03:01.534 Library execinfo found: NO 00:03:01.534 Has header "execinfo.h" : YES 00:03:01.534 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:01.534 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:01.534 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:01.534 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:01.534 Run-time dependency openssl found: YES 3.0.9 00:03:01.534 Run-time dependency libpcap found: YES 1.10.4 00:03:01.534 Has header "pcap.h" with dependency libpcap: YES 00:03:01.534 Compiler for C supports arguments -Wcast-qual: YES 00:03:01.534 Compiler for C supports arguments -Wdeprecated: YES 00:03:01.534 Compiler for C supports arguments -Wformat: YES 00:03:01.534 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:01.534 Compiler for C supports arguments -Wformat-security: NO 00:03:01.534 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.534 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:01.534 Compiler for C supports arguments -Wnested-externs: YES 00:03:01.534 Compiler for C supports arguments -Wold-style-definition: YES 00:03:01.534 Compiler for C supports arguments -Wpointer-arith: YES 00:03:01.534 Compiler for C supports arguments -Wsign-compare: YES 00:03:01.534 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:01.534 Compiler for C supports arguments -Wundef: YES 00:03:01.534 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.534 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:01.534 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:03:01.534 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.534 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:01.534 Program objdump found: YES (/usr/bin/objdump) 00:03:01.534 Compiler for C supports arguments -mavx512f: YES 00:03:01.534 Checking if "AVX512 checking" compiles: YES 00:03:01.534 Fetching value of define "__SSE4_2__" : 1 00:03:01.534 Fetching value of define "__AES__" : 1 00:03:01.534 Fetching value of define "__AVX__" : 1 00:03:01.534 Fetching value of define "__AVX2__" : (undefined) 00:03:01.534 Fetching value of define "__AVX512BW__" : (undefined) 00:03:01.534 Fetching value of define "__AVX512CD__" : (undefined) 00:03:01.534 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:01.534 Fetching value of define "__AVX512F__" : (undefined) 00:03:01.534 Fetching value of define "__AVX512VL__" : (undefined) 00:03:01.534 Fetching value of define "__PCLMUL__" : 1 00:03:01.534 Fetching value of define "__RDRND__" : 1 00:03:01.534 Fetching value of define "__RDSEED__" : (undefined) 00:03:01.534 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:01.534 Fetching value of define "__znver1__" : (undefined) 00:03:01.534 Fetching value of define "__znver2__" : (undefined) 00:03:01.534 Fetching value of define "__znver3__" : (undefined) 00:03:01.534 Fetching value of define "__znver4__" : (undefined) 00:03:01.534 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.534 Message: lib/log: Defining dependency "log" 00:03:01.534 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.534 Message: lib/telemetry: Defining dependency "telemetry" 00:03:01.534 Checking for function "getentropy" : NO 00:03:01.534 Message: lib/eal: Defining dependency "eal" 00:03:01.534 Message: lib/ring: Defining dependency "ring" 00:03:01.534 Message: lib/rcu: Defining dependency "rcu" 00:03:01.534 Message: lib/mempool: Defining dependency "mempool" 00:03:01.534 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.534 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:01.534 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:01.534 Compiler for C supports arguments -mpclmul: YES 00:03:01.534 Compiler for C supports arguments -maes: YES 00:03:01.534 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.534 Compiler for C supports arguments -mavx512bw: YES 00:03:01.534 Compiler for C supports arguments -mavx512dq: YES 00:03:01.534 Compiler for C supports arguments -mavx512vl: YES 00:03:01.535 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:01.535 Compiler for C supports arguments -mavx2: YES 00:03:01.535 Compiler for C supports arguments -mavx: YES 00:03:01.535 Message: lib/net: Defining dependency "net" 00:03:01.535 Message: lib/meter: Defining dependency "meter" 00:03:01.535 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.535 Message: lib/pci: Defining dependency "pci" 00:03:01.535 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.535 Message: lib/hash: Defining dependency "hash" 00:03:01.535 Message: lib/timer: Defining dependency "timer" 00:03:01.535 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.535 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.535 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.535 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.535 Message: lib/power: Defining dependency "power" 00:03:01.535 Message: lib/reorder: Defining dependency "reorder" 00:03:01.535 
Message: lib/security: Defining dependency "security" 00:03:01.535 Has header "linux/userfaultfd.h" : YES 00:03:01.535 Has header "linux/vduse.h" : YES 00:03:01.535 Message: lib/vhost: Defining dependency "vhost" 00:03:01.535 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.535 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.535 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.535 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:01.535 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:01.535 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:01.535 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:01.535 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:01.535 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:01.535 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:01.535 Program doxygen found: YES (/usr/bin/doxygen) 00:03:01.535 Configuring doxy-api-html.conf using configuration 00:03:01.535 Configuring doxy-api-man.conf using configuration 00:03:01.535 Program mandb found: YES (/usr/bin/mandb) 00:03:01.535 Program sphinx-build found: NO 00:03:01.535 Configuring rte_build_config.h using configuration 00:03:01.535 Message: 00:03:01.535 ================= 00:03:01.535 Applications Enabled 00:03:01.535 ================= 00:03:01.535 00:03:01.535 apps: 00:03:01.535 00:03:01.535 00:03:01.535 Message: 00:03:01.535 ================= 00:03:01.535 Libraries Enabled 00:03:01.535 ================= 00:03:01.535 00:03:01.535 libs: 00:03:01.535 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:01.535 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:01.535 cryptodev, dmadev, power, reorder, security, vhost, 00:03:01.535 00:03:01.535 Message: 00:03:01.535 =============== 00:03:01.535 Drivers Enabled 00:03:01.535 =============== 00:03:01.535 00:03:01.535 common: 00:03:01.535 00:03:01.535 bus: 00:03:01.535 pci, vdev, 00:03:01.535 mempool: 00:03:01.535 ring, 00:03:01.535 dma: 00:03:01.535 00:03:01.535 net: 00:03:01.535 00:03:01.535 crypto: 00:03:01.535 00:03:01.535 compress: 00:03:01.535 00:03:01.535 vdpa: 00:03:01.535 00:03:01.535 00:03:01.535 Message: 00:03:01.535 ================= 00:03:01.535 Content Skipped 00:03:01.535 ================= 00:03:01.535 00:03:01.535 apps: 00:03:01.535 dumpcap: explicitly disabled via build config 00:03:01.535 graph: explicitly disabled via build config 00:03:01.535 pdump: explicitly disabled via build config 00:03:01.535 proc-info: explicitly disabled via build config 00:03:01.535 test-acl: explicitly disabled via build config 00:03:01.535 test-bbdev: explicitly disabled via build config 00:03:01.535 test-cmdline: explicitly disabled via build config 00:03:01.535 test-compress-perf: explicitly disabled via build config 00:03:01.535 test-crypto-perf: explicitly disabled via build config 00:03:01.535 test-dma-perf: explicitly disabled via build config 00:03:01.535 test-eventdev: explicitly disabled via build config 00:03:01.535 test-fib: explicitly disabled via build config 00:03:01.535 test-flow-perf: explicitly disabled via build config 00:03:01.535 test-gpudev: explicitly disabled via build config 00:03:01.535 test-mldev: explicitly disabled via build config 00:03:01.535 test-pipeline: explicitly disabled via build config 00:03:01.535 test-pmd: explicitly disabled via build config 
00:03:01.535 test-regex: explicitly disabled via build config 00:03:01.535 test-sad: explicitly disabled via build config 00:03:01.535 test-security-perf: explicitly disabled via build config 00:03:01.535 00:03:01.535 libs: 00:03:01.535 argparse: explicitly disabled via build config 00:03:01.535 metrics: explicitly disabled via build config 00:03:01.535 acl: explicitly disabled via build config 00:03:01.535 bbdev: explicitly disabled via build config 00:03:01.535 bitratestats: explicitly disabled via build config 00:03:01.535 bpf: explicitly disabled via build config 00:03:01.535 cfgfile: explicitly disabled via build config 00:03:01.535 distributor: explicitly disabled via build config 00:03:01.535 efd: explicitly disabled via build config 00:03:01.535 eventdev: explicitly disabled via build config 00:03:01.535 dispatcher: explicitly disabled via build config 00:03:01.535 gpudev: explicitly disabled via build config 00:03:01.535 gro: explicitly disabled via build config 00:03:01.535 gso: explicitly disabled via build config 00:03:01.535 ip_frag: explicitly disabled via build config 00:03:01.535 jobstats: explicitly disabled via build config 00:03:01.535 latencystats: explicitly disabled via build config 00:03:01.535 lpm: explicitly disabled via build config 00:03:01.535 member: explicitly disabled via build config 00:03:01.535 pcapng: explicitly disabled via build config 00:03:01.535 rawdev: explicitly disabled via build config 00:03:01.535 regexdev: explicitly disabled via build config 00:03:01.535 mldev: explicitly disabled via build config 00:03:01.535 rib: explicitly disabled via build config 00:03:01.535 sched: explicitly disabled via build config 00:03:01.535 stack: explicitly disabled via build config 00:03:01.535 ipsec: explicitly disabled via build config 00:03:01.535 pdcp: explicitly disabled via build config 00:03:01.535 fib: explicitly disabled via build config 00:03:01.535 port: explicitly disabled via build config 00:03:01.535 pdump: explicitly disabled via build config 00:03:01.535 table: explicitly disabled via build config 00:03:01.535 pipeline: explicitly disabled via build config 00:03:01.535 graph: explicitly disabled via build config 00:03:01.535 node: explicitly disabled via build config 00:03:01.535 00:03:01.535 drivers: 00:03:01.535 common/cpt: not in enabled drivers build config 00:03:01.535 common/dpaax: not in enabled drivers build config 00:03:01.535 common/iavf: not in enabled drivers build config 00:03:01.535 common/idpf: not in enabled drivers build config 00:03:01.535 common/ionic: not in enabled drivers build config 00:03:01.535 common/mvep: not in enabled drivers build config 00:03:01.535 common/octeontx: not in enabled drivers build config 00:03:01.535 bus/auxiliary: not in enabled drivers build config 00:03:01.535 bus/cdx: not in enabled drivers build config 00:03:01.535 bus/dpaa: not in enabled drivers build config 00:03:01.535 bus/fslmc: not in enabled drivers build config 00:03:01.535 bus/ifpga: not in enabled drivers build config 00:03:01.535 bus/platform: not in enabled drivers build config 00:03:01.535 bus/uacce: not in enabled drivers build config 00:03:01.535 bus/vmbus: not in enabled drivers build config 00:03:01.535 common/cnxk: not in enabled drivers build config 00:03:01.535 common/mlx5: not in enabled drivers build config 00:03:01.535 common/nfp: not in enabled drivers build config 00:03:01.535 common/nitrox: not in enabled drivers build config 00:03:01.535 common/qat: not in enabled drivers build config 00:03:01.535 common/sfc_efx: not in 
enabled drivers build config 00:03:01.535 mempool/bucket: not in enabled drivers build config 00:03:01.535 mempool/cnxk: not in enabled drivers build config 00:03:01.535 mempool/dpaa: not in enabled drivers build config 00:03:01.535 mempool/dpaa2: not in enabled drivers build config 00:03:01.535 mempool/octeontx: not in enabled drivers build config 00:03:01.535 mempool/stack: not in enabled drivers build config 00:03:01.535 dma/cnxk: not in enabled drivers build config 00:03:01.535 dma/dpaa: not in enabled drivers build config 00:03:01.535 dma/dpaa2: not in enabled drivers build config 00:03:01.535 dma/hisilicon: not in enabled drivers build config 00:03:01.535 dma/idxd: not in enabled drivers build config 00:03:01.535 dma/ioat: not in enabled drivers build config 00:03:01.535 dma/skeleton: not in enabled drivers build config 00:03:01.535 net/af_packet: not in enabled drivers build config 00:03:01.535 net/af_xdp: not in enabled drivers build config 00:03:01.535 net/ark: not in enabled drivers build config 00:03:01.535 net/atlantic: not in enabled drivers build config 00:03:01.535 net/avp: not in enabled drivers build config 00:03:01.535 net/axgbe: not in enabled drivers build config 00:03:01.535 net/bnx2x: not in enabled drivers build config 00:03:01.535 net/bnxt: not in enabled drivers build config 00:03:01.535 net/bonding: not in enabled drivers build config 00:03:01.535 net/cnxk: not in enabled drivers build config 00:03:01.535 net/cpfl: not in enabled drivers build config 00:03:01.535 net/cxgbe: not in enabled drivers build config 00:03:01.535 net/dpaa: not in enabled drivers build config 00:03:01.535 net/dpaa2: not in enabled drivers build config 00:03:01.535 net/e1000: not in enabled drivers build config 00:03:01.535 net/ena: not in enabled drivers build config 00:03:01.535 net/enetc: not in enabled drivers build config 00:03:01.535 net/enetfec: not in enabled drivers build config 00:03:01.535 net/enic: not in enabled drivers build config 00:03:01.535 net/failsafe: not in enabled drivers build config 00:03:01.535 net/fm10k: not in enabled drivers build config 00:03:01.535 net/gve: not in enabled drivers build config 00:03:01.535 net/hinic: not in enabled drivers build config 00:03:01.535 net/hns3: not in enabled drivers build config 00:03:01.535 net/i40e: not in enabled drivers build config 00:03:01.535 net/iavf: not in enabled drivers build config 00:03:01.535 net/ice: not in enabled drivers build config 00:03:01.535 net/idpf: not in enabled drivers build config 00:03:01.535 net/igc: not in enabled drivers build config 00:03:01.535 net/ionic: not in enabled drivers build config 00:03:01.535 net/ipn3ke: not in enabled drivers build config 00:03:01.535 net/ixgbe: not in enabled drivers build config 00:03:01.535 net/mana: not in enabled drivers build config 00:03:01.535 net/memif: not in enabled drivers build config 00:03:01.535 net/mlx4: not in enabled drivers build config 00:03:01.535 net/mlx5: not in enabled drivers build config 00:03:01.535 net/mvneta: not in enabled drivers build config 00:03:01.535 net/mvpp2: not in enabled drivers build config 00:03:01.535 net/netvsc: not in enabled drivers build config 00:03:01.536 net/nfb: not in enabled drivers build config 00:03:01.536 net/nfp: not in enabled drivers build config 00:03:01.536 net/ngbe: not in enabled drivers build config 00:03:01.536 net/null: not in enabled drivers build config 00:03:01.536 net/octeontx: not in enabled drivers build config 00:03:01.536 net/octeon_ep: not in enabled drivers build config 00:03:01.536 
net/pcap: not in enabled drivers build config 00:03:01.536 net/pfe: not in enabled drivers build config 00:03:01.536 net/qede: not in enabled drivers build config 00:03:01.536 net/ring: not in enabled drivers build config 00:03:01.536 net/sfc: not in enabled drivers build config 00:03:01.536 net/softnic: not in enabled drivers build config 00:03:01.536 net/tap: not in enabled drivers build config 00:03:01.536 net/thunderx: not in enabled drivers build config 00:03:01.536 net/txgbe: not in enabled drivers build config 00:03:01.536 net/vdev_netvsc: not in enabled drivers build config 00:03:01.536 net/vhost: not in enabled drivers build config 00:03:01.536 net/virtio: not in enabled drivers build config 00:03:01.536 net/vmxnet3: not in enabled drivers build config 00:03:01.536 raw/*: missing internal dependency, "rawdev" 00:03:01.536 crypto/armv8: not in enabled drivers build config 00:03:01.536 crypto/bcmfs: not in enabled drivers build config 00:03:01.536 crypto/caam_jr: not in enabled drivers build config 00:03:01.536 crypto/ccp: not in enabled drivers build config 00:03:01.536 crypto/cnxk: not in enabled drivers build config 00:03:01.536 crypto/dpaa_sec: not in enabled drivers build config 00:03:01.536 crypto/dpaa2_sec: not in enabled drivers build config 00:03:01.536 crypto/ipsec_mb: not in enabled drivers build config 00:03:01.536 crypto/mlx5: not in enabled drivers build config 00:03:01.536 crypto/mvsam: not in enabled drivers build config 00:03:01.536 crypto/nitrox: not in enabled drivers build config 00:03:01.536 crypto/null: not in enabled drivers build config 00:03:01.536 crypto/octeontx: not in enabled drivers build config 00:03:01.536 crypto/openssl: not in enabled drivers build config 00:03:01.536 crypto/scheduler: not in enabled drivers build config 00:03:01.536 crypto/uadk: not in enabled drivers build config 00:03:01.536 crypto/virtio: not in enabled drivers build config 00:03:01.536 compress/isal: not in enabled drivers build config 00:03:01.536 compress/mlx5: not in enabled drivers build config 00:03:01.536 compress/nitrox: not in enabled drivers build config 00:03:01.536 compress/octeontx: not in enabled drivers build config 00:03:01.536 compress/zlib: not in enabled drivers build config 00:03:01.536 regex/*: missing internal dependency, "regexdev" 00:03:01.536 ml/*: missing internal dependency, "mldev" 00:03:01.536 vdpa/ifc: not in enabled drivers build config 00:03:01.536 vdpa/mlx5: not in enabled drivers build config 00:03:01.536 vdpa/nfp: not in enabled drivers build config 00:03:01.536 vdpa/sfc: not in enabled drivers build config 00:03:01.536 event/*: missing internal dependency, "eventdev" 00:03:01.536 baseband/*: missing internal dependency, "bbdev" 00:03:01.536 gpu/*: missing internal dependency, "gpudev" 00:03:01.536 00:03:01.536 00:03:01.536 Build targets in project: 85 00:03:01.536 00:03:01.536 DPDK 24.03.0 00:03:01.536 00:03:01.536 User defined options 00:03:01.536 buildtype : debug 00:03:01.536 default_library : shared 00:03:01.536 libdir : lib 00:03:01.536 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:01.536 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:01.536 c_link_args : 00:03:01.536 cpu_instruction_set: native 00:03:01.536 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:03:01.536 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:03:01.536 enable_docs : false 00:03:01.536 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:01.536 enable_kmods : false 00:03:01.536 max_lcores : 128 00:03:01.536 tests : false 00:03:01.536 00:03:01.536 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.536 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:03:01.536 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:01.536 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:01.536 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:01.536 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:01.536 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:01.536 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:01.536 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:01.536 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.536 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:01.536 [10/268] Linking static target lib/librte_kvargs.a 00:03:01.536 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:01.536 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:01.794 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:01.794 [14/268] Linking static target lib/librte_log.a 00:03:01.794 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:01.794 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:02.366 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.366 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:02.366 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:02.366 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:02.366 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:02.366 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:02.366 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:02.366 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:02.366 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:02.627 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:02.627 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:02.627 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:02.627 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:02.627 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:02.627 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:02.627 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:02.627 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:02.627 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:02.627 [35/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:02.627 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:02.627 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:02.627 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:02.627 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:02.627 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:02.627 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:02.627 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:02.627 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:02.627 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:02.627 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:02.627 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:02.627 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:02.627 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:02.627 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:02.627 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:02.627 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.627 [52/268] Linking static target lib/librte_telemetry.a 00:03:02.627 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:02.627 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:02.627 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:02.627 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:02.627 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:02.627 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:02.627 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.627 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:02.627 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:02.627 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:02.627 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:02.898 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.898 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.898 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:02.898 [67/268] Linking target lib/librte_log.so.24.1 00:03:03.157 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:03.157 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:03.157 [70/268] Linking static target lib/librte_pci.a 00:03:03.157 
[71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:03.157 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:03.419 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.419 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:03.419 [75/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:03.419 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:03.419 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:03.419 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:03.419 [79/268] Linking target lib/librte_kvargs.so.24.1 00:03:03.419 [80/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:03.419 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.419 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:03.419 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:03.419 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:03.419 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:03.419 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:03.419 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:03.419 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.419 [89/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.419 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.419 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.419 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:03.419 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:03.419 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.419 [95/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:03.419 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.684 [97/268] Linking static target lib/librte_ring.a 00:03:03.684 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:03.684 [99/268] Linking static target lib/librte_meter.a 00:03:03.684 [100/268] Linking target lib/librte_telemetry.so.24.1 00:03:03.684 [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:03.684 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:03.684 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:03.684 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.684 [105/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:03.684 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:03.684 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:03.684 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:03.684 [109/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.684 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:03.684 [111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:03.684 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:03.684 [113/268] Linking static target lib/librte_eal.a 00:03:03.684 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:03.684 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:03.684 [116/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:03.684 [117/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:03.684 [118/268] Linking static target lib/librte_mempool.a 00:03:03.684 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.684 [120/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:03.684 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:03.684 [122/268] Linking static target lib/librte_rcu.a 00:03:03.684 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:03.684 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:03.684 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:03.948 [126/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:03.948 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:03.948 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:03.948 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:03.948 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.948 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:03.948 [132/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:03.948 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:03.948 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:04.210 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:04.210 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:04.210 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.210 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:04.210 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:04.210 [140/268] Linking static target lib/librte_net.a 00:03:04.210 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.210 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.469 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:04.469 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.469 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:04.469 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:04.469 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:04.469 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.469 [149/268] Linking static target lib/librte_cmdline.a 00:03:04.469 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:04.469 [151/268] Linking static target lib/librte_timer.a 00:03:04.469 [152/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.469 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:04.469 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:04.727 [155/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:04.727 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.727 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:04.727 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:04.727 [159/268] Linking static target lib/librte_dmadev.a 00:03:04.727 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:04.727 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.727 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.727 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:04.727 [164/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.727 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.727 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:04.986 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:04.986 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:04.986 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.986 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.986 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.986 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:04.986 [173/268] Linking static target lib/librte_power.a 00:03:04.986 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.986 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.986 [176/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.986 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.986 [178/268] Linking static target lib/librte_compressdev.a 00:03:04.986 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.986 [180/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:04.986 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.986 [182/268] Linking static target lib/librte_mbuf.a 00:03:05.244 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.244 [184/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.244 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.244 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.244 [187/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.244 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:05.244 [189/268] Linking static target lib/librte_hash.a 00:03:05.244 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.244 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 
00:03:05.244 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.244 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.244 [194/268] Linking static target lib/librte_reorder.a 00:03:05.244 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:05.244 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.244 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.244 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.244 [199/268] Linking static target lib/librte_security.a 00:03:05.501 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.501 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.501 [202/268] Linking static target drivers/librte_bus_vdev.a 00:03:05.501 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:05.501 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.501 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.501 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.501 [207/268] Linking static target drivers/librte_bus_pci.a 00:03:05.501 [208/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.502 [209/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.502 [210/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.502 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.502 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.502 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.502 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.760 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:05.760 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.760 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.760 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.760 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.760 [220/268] Linking static target drivers/librte_mempool_ring.a 00:03:05.760 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.760 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:05.760 [223/268] Linking static target lib/librte_ethdev.a 00:03:06.018 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.018 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:06.018 [226/268] Linking static target lib/librte_cryptodev.a 00:03:06.952 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.886 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:09.852 [229/268] 
Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.109 [230/268] Linking target lib/librte_eal.so.24.1 00:03:10.109 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.109 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:10.109 [233/268] Linking target lib/librte_timer.so.24.1 00:03:10.109 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:10.109 [235/268] Linking target lib/librte_ring.so.24.1 00:03:10.109 [236/268] Linking target lib/librte_pci.so.24.1 00:03:10.109 [237/268] Linking target lib/librte_meter.so.24.1 00:03:10.109 [238/268] Linking target lib/librte_dmadev.so.24.1 00:03:10.367 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:10.367 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:10.367 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:10.367 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:10.367 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:10.367 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:10.367 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:10.367 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:10.625 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:10.625 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:10.625 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:10.625 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:10.625 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:10.625 [252/268] Linking target lib/librte_compressdev.so.24.1 00:03:10.625 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:10.625 [254/268] Linking target lib/librte_net.so.24.1 00:03:10.625 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:10.883 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:10.883 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:10.883 [258/268] Linking target lib/librte_hash.so.24.1 00:03:10.883 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:10.883 [260/268] Linking target lib/librte_security.so.24.1 00:03:10.883 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:11.163 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:11.163 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:11.163 [264/268] Linking target lib/librte_power.so.24.1 00:03:13.686 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:13.686 [266/268] Linking static target lib/librte_vhost.a 00:03:14.619 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.619 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:14.619 INFO: autodetecting backend as ninja 00:03:14.619 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:15.553 CC lib/ut/ut.o 00:03:15.553 CC lib/log/log.o 00:03:15.553 CC lib/log/log_flags.o 00:03:15.553 CC lib/log/log_deprecated.o 00:03:15.553 CC 
lib/ut_mock/mock.o 00:03:15.811 LIB libspdk_log.a 00:03:15.811 LIB libspdk_ut.a 00:03:15.811 LIB libspdk_ut_mock.a 00:03:15.811 SO libspdk_ut.so.2.0 00:03:15.811 SO libspdk_log.so.7.0 00:03:15.811 SO libspdk_ut_mock.so.6.0 00:03:15.811 SYMLINK libspdk_ut.so 00:03:15.811 SYMLINK libspdk_ut_mock.so 00:03:15.811 SYMLINK libspdk_log.so 00:03:16.069 CXX lib/trace_parser/trace.o 00:03:16.069 CC lib/ioat/ioat.o 00:03:16.069 CC lib/util/base64.o 00:03:16.069 CC lib/dma/dma.o 00:03:16.069 CC lib/util/bit_array.o 00:03:16.069 CC lib/util/cpuset.o 00:03:16.069 CC lib/util/crc16.o 00:03:16.069 CC lib/util/crc32.o 00:03:16.069 CC lib/util/crc32c.o 00:03:16.069 CC lib/util/crc32_ieee.o 00:03:16.069 CC lib/util/crc64.o 00:03:16.069 CC lib/util/dif.o 00:03:16.069 CC lib/util/fd.o 00:03:16.069 CC lib/util/fd_group.o 00:03:16.069 CC lib/util/file.o 00:03:16.069 CC lib/util/hexlify.o 00:03:16.069 CC lib/util/iov.o 00:03:16.069 CC lib/util/math.o 00:03:16.069 CC lib/util/net.o 00:03:16.069 CC lib/util/pipe.o 00:03:16.069 CC lib/util/strerror_tls.o 00:03:16.069 CC lib/util/string.o 00:03:16.069 CC lib/util/uuid.o 00:03:16.069 CC lib/util/xor.o 00:03:16.069 CC lib/util/zipf.o 00:03:16.069 CC lib/vfio_user/host/vfio_user_pci.o 00:03:16.069 CC lib/vfio_user/host/vfio_user.o 00:03:16.328 LIB libspdk_dma.a 00:03:16.328 SO libspdk_dma.so.4.0 00:03:16.328 SYMLINK libspdk_dma.so 00:03:16.328 LIB libspdk_ioat.a 00:03:16.328 LIB libspdk_vfio_user.a 00:03:16.328 SO libspdk_ioat.so.7.0 00:03:16.328 SO libspdk_vfio_user.so.5.0 00:03:16.328 SYMLINK libspdk_ioat.so 00:03:16.328 SYMLINK libspdk_vfio_user.so 00:03:16.585 LIB libspdk_util.a 00:03:16.585 SO libspdk_util.so.9.1 00:03:16.843 SYMLINK libspdk_util.so 00:03:16.843 CC lib/conf/conf.o 00:03:16.843 CC lib/vmd/vmd.o 00:03:16.843 CC lib/idxd/idxd.o 00:03:16.843 CC lib/rdma_provider/common.o 00:03:16.843 CC lib/json/json_parse.o 00:03:16.843 CC lib/rdma_utils/rdma_utils.o 00:03:16.843 CC lib/vmd/led.o 00:03:16.843 CC lib/env_dpdk/env.o 00:03:16.843 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:16.843 CC lib/json/json_util.o 00:03:16.843 CC lib/idxd/idxd_user.o 00:03:16.843 CC lib/env_dpdk/memory.o 00:03:16.843 CC lib/json/json_write.o 00:03:16.843 CC lib/idxd/idxd_kernel.o 00:03:16.843 CC lib/env_dpdk/pci.o 00:03:16.843 CC lib/env_dpdk/init.o 00:03:16.843 CC lib/env_dpdk/threads.o 00:03:16.843 CC lib/env_dpdk/pci_ioat.o 00:03:16.843 CC lib/env_dpdk/pci_virtio.o 00:03:16.843 CC lib/env_dpdk/pci_vmd.o 00:03:16.843 CC lib/env_dpdk/pci_idxd.o 00:03:16.843 CC lib/env_dpdk/pci_event.o 00:03:16.843 CC lib/env_dpdk/pci_dpdk.o 00:03:16.843 CC lib/env_dpdk/sigbus_handler.o 00:03:16.843 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.843 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.102 LIB libspdk_trace_parser.a 00:03:17.102 SO libspdk_trace_parser.so.5.0 00:03:17.102 LIB libspdk_rdma_provider.a 00:03:17.102 SYMLINK libspdk_trace_parser.so 00:03:17.102 SO libspdk_rdma_provider.so.6.0 00:03:17.102 LIB libspdk_rdma_utils.a 00:03:17.102 LIB libspdk_conf.a 00:03:17.102 SYMLINK libspdk_rdma_provider.so 00:03:17.359 SO libspdk_rdma_utils.so.1.0 00:03:17.359 SO libspdk_conf.so.6.0 00:03:17.359 LIB libspdk_json.a 00:03:17.359 SYMLINK libspdk_conf.so 00:03:17.359 SYMLINK libspdk_rdma_utils.so 00:03:17.359 SO libspdk_json.so.6.0 00:03:17.359 SYMLINK libspdk_json.so 00:03:17.617 LIB libspdk_idxd.a 00:03:17.617 CC lib/jsonrpc/jsonrpc_server.o 00:03:17.617 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:17.617 CC lib/jsonrpc/jsonrpc_client.o 00:03:17.617 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:17.617 SO 
libspdk_idxd.so.12.0 00:03:17.617 SYMLINK libspdk_idxd.so 00:03:17.617 LIB libspdk_vmd.a 00:03:17.617 SO libspdk_vmd.so.6.0 00:03:17.617 SYMLINK libspdk_vmd.so 00:03:17.875 LIB libspdk_jsonrpc.a 00:03:17.875 SO libspdk_jsonrpc.so.6.0 00:03:17.875 SYMLINK libspdk_jsonrpc.so 00:03:18.133 CC lib/rpc/rpc.o 00:03:18.133 LIB libspdk_rpc.a 00:03:18.390 SO libspdk_rpc.so.6.0 00:03:18.390 SYMLINK libspdk_rpc.so 00:03:18.390 CC lib/trace/trace.o 00:03:18.391 CC lib/trace/trace_flags.o 00:03:18.391 CC lib/trace/trace_rpc.o 00:03:18.391 CC lib/keyring/keyring.o 00:03:18.391 CC lib/keyring/keyring_rpc.o 00:03:18.391 CC lib/notify/notify.o 00:03:18.391 CC lib/notify/notify_rpc.o 00:03:18.648 LIB libspdk_notify.a 00:03:18.648 SO libspdk_notify.so.6.0 00:03:18.648 LIB libspdk_keyring.a 00:03:18.648 SYMLINK libspdk_notify.so 00:03:18.648 LIB libspdk_trace.a 00:03:18.648 SO libspdk_keyring.so.1.0 00:03:18.905 SO libspdk_trace.so.10.0 00:03:18.905 SYMLINK libspdk_keyring.so 00:03:18.905 SYMLINK libspdk_trace.so 00:03:18.905 LIB libspdk_env_dpdk.a 00:03:18.905 SO libspdk_env_dpdk.so.15.0 00:03:18.905 CC lib/thread/thread.o 00:03:18.905 CC lib/thread/iobuf.o 00:03:18.905 CC lib/sock/sock.o 00:03:18.905 CC lib/sock/sock_rpc.o 00:03:19.163 SYMLINK libspdk_env_dpdk.so 00:03:19.420 LIB libspdk_sock.a 00:03:19.420 SO libspdk_sock.so.10.0 00:03:19.420 SYMLINK libspdk_sock.so 00:03:19.678 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:19.678 CC lib/nvme/nvme_ctrlr.o 00:03:19.678 CC lib/nvme/nvme_fabric.o 00:03:19.678 CC lib/nvme/nvme_ns_cmd.o 00:03:19.678 CC lib/nvme/nvme_ns.o 00:03:19.678 CC lib/nvme/nvme_pcie_common.o 00:03:19.678 CC lib/nvme/nvme_pcie.o 00:03:19.678 CC lib/nvme/nvme_qpair.o 00:03:19.678 CC lib/nvme/nvme.o 00:03:19.678 CC lib/nvme/nvme_quirks.o 00:03:19.678 CC lib/nvme/nvme_transport.o 00:03:19.678 CC lib/nvme/nvme_discovery.o 00:03:19.678 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.678 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.678 CC lib/nvme/nvme_tcp.o 00:03:19.678 CC lib/nvme/nvme_opal.o 00:03:19.678 CC lib/nvme/nvme_io_msg.o 00:03:19.678 CC lib/nvme/nvme_poll_group.o 00:03:19.678 CC lib/nvme/nvme_zns.o 00:03:19.678 CC lib/nvme/nvme_stubs.o 00:03:19.678 CC lib/nvme/nvme_auth.o 00:03:19.678 CC lib/nvme/nvme_vfio_user.o 00:03:19.678 CC lib/nvme/nvme_cuse.o 00:03:19.678 CC lib/nvme/nvme_rdma.o 00:03:20.610 LIB libspdk_thread.a 00:03:20.610 SO libspdk_thread.so.10.1 00:03:20.610 SYMLINK libspdk_thread.so 00:03:20.867 CC lib/blob/blobstore.o 00:03:20.867 CC lib/virtio/virtio.o 00:03:20.867 CC lib/accel/accel.o 00:03:20.867 CC lib/init/json_config.o 00:03:20.867 CC lib/vfu_tgt/tgt_endpoint.o 00:03:20.867 CC lib/accel/accel_rpc.o 00:03:20.867 CC lib/virtio/virtio_vhost_user.o 00:03:20.867 CC lib/blob/request.o 00:03:20.867 CC lib/vfu_tgt/tgt_rpc.o 00:03:20.867 CC lib/init/subsystem.o 00:03:20.867 CC lib/accel/accel_sw.o 00:03:20.867 CC lib/blob/zeroes.o 00:03:20.867 CC lib/virtio/virtio_vfio_user.o 00:03:20.867 CC lib/init/subsystem_rpc.o 00:03:20.867 CC lib/blob/blob_bs_dev.o 00:03:20.867 CC lib/init/rpc.o 00:03:20.867 CC lib/virtio/virtio_pci.o 00:03:21.125 LIB libspdk_init.a 00:03:21.125 SO libspdk_init.so.5.0 00:03:21.125 LIB libspdk_virtio.a 00:03:21.125 LIB libspdk_vfu_tgt.a 00:03:21.125 SO libspdk_vfu_tgt.so.3.0 00:03:21.125 SYMLINK libspdk_init.so 00:03:21.125 SO libspdk_virtio.so.7.0 00:03:21.125 SYMLINK libspdk_vfu_tgt.so 00:03:21.125 SYMLINK libspdk_virtio.so 00:03:21.382 CC lib/event/app.o 00:03:21.382 CC lib/event/reactor.o 00:03:21.382 CC lib/event/log_rpc.o 00:03:21.382 CC lib/event/app_rpc.o 
00:03:21.382 CC lib/event/scheduler_static.o 00:03:21.640 LIB libspdk_event.a 00:03:21.897 SO libspdk_event.so.14.0 00:03:21.897 LIB libspdk_accel.a 00:03:21.898 SYMLINK libspdk_event.so 00:03:21.898 SO libspdk_accel.so.15.1 00:03:21.898 SYMLINK libspdk_accel.so 00:03:22.155 LIB libspdk_nvme.a 00:03:22.155 CC lib/bdev/bdev.o 00:03:22.155 CC lib/bdev/bdev_rpc.o 00:03:22.155 CC lib/bdev/bdev_zone.o 00:03:22.155 CC lib/bdev/part.o 00:03:22.155 CC lib/bdev/scsi_nvme.o 00:03:22.155 SO libspdk_nvme.so.13.1 00:03:22.412 SYMLINK libspdk_nvme.so 00:03:23.820 LIB libspdk_blob.a 00:03:23.820 SO libspdk_blob.so.11.0 00:03:23.820 SYMLINK libspdk_blob.so 00:03:24.077 CC lib/lvol/lvol.o 00:03:24.077 CC lib/blobfs/blobfs.o 00:03:24.077 CC lib/blobfs/tree.o 00:03:24.642 LIB libspdk_bdev.a 00:03:24.642 SO libspdk_bdev.so.15.1 00:03:24.642 SYMLINK libspdk_bdev.so 00:03:24.905 LIB libspdk_blobfs.a 00:03:24.905 SO libspdk_blobfs.so.10.0 00:03:24.905 CC lib/nbd/nbd.o 00:03:24.905 CC lib/scsi/dev.o 00:03:24.905 CC lib/ublk/ublk.o 00:03:24.905 CC lib/nvmf/ctrlr.o 00:03:24.905 CC lib/nbd/nbd_rpc.o 00:03:24.905 CC lib/scsi/lun.o 00:03:24.905 CC lib/ublk/ublk_rpc.o 00:03:24.905 CC lib/nvmf/ctrlr_discovery.o 00:03:24.905 CC lib/ftl/ftl_core.o 00:03:24.905 CC lib/scsi/port.o 00:03:24.905 CC lib/nvmf/ctrlr_bdev.o 00:03:24.905 CC lib/ftl/ftl_init.o 00:03:24.905 CC lib/scsi/scsi.o 00:03:24.905 CC lib/nvmf/subsystem.o 00:03:24.905 CC lib/scsi/scsi_bdev.o 00:03:24.905 CC lib/ftl/ftl_layout.o 00:03:24.905 CC lib/scsi/scsi_pr.o 00:03:24.905 CC lib/nvmf/nvmf.o 00:03:24.905 CC lib/nvmf/nvmf_rpc.o 00:03:24.905 CC lib/ftl/ftl_io.o 00:03:24.905 CC lib/ftl/ftl_debug.o 00:03:24.905 CC lib/scsi/scsi_rpc.o 00:03:24.905 CC lib/nvmf/transport.o 00:03:24.905 CC lib/scsi/task.o 00:03:24.905 CC lib/nvmf/tcp.o 00:03:24.905 CC lib/ftl/ftl_sb.o 00:03:24.905 CC lib/nvmf/stubs.o 00:03:24.905 CC lib/ftl/ftl_l2p.o 00:03:24.905 CC lib/ftl/ftl_l2p_flat.o 00:03:24.905 CC lib/nvmf/mdns_server.o 00:03:24.905 CC lib/nvmf/vfio_user.o 00:03:24.905 CC lib/ftl/ftl_nv_cache.o 00:03:24.905 CC lib/ftl/ftl_band.o 00:03:24.905 CC lib/nvmf/rdma.o 00:03:24.905 CC lib/nvmf/auth.o 00:03:24.905 CC lib/ftl/ftl_band_ops.o 00:03:24.905 CC lib/ftl/ftl_writer.o 00:03:24.905 CC lib/ftl/ftl_rq.o 00:03:24.905 CC lib/ftl/ftl_reloc.o 00:03:24.905 CC lib/ftl/ftl_l2p_cache.o 00:03:24.905 CC lib/ftl/ftl_p2l.o 00:03:24.905 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.905 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.905 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.905 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.905 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.905 SYMLINK libspdk_blobfs.so 00:03:24.905 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.905 LIB libspdk_lvol.a 00:03:25.163 SO libspdk_lvol.so.10.0 00:03:25.163 SYMLINK libspdk_lvol.so 00:03:25.163 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.163 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.163 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.163 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.427 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.427 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.427 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:25.427 CC lib/ftl/utils/ftl_conf.o 00:03:25.427 CC lib/ftl/utils/ftl_md.o 00:03:25.427 CC lib/ftl/utils/ftl_mempool.o 00:03:25.427 CC lib/ftl/utils/ftl_bitmap.o 00:03:25.427 CC lib/ftl/utils/ftl_property.o 00:03:25.427 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:25.427 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:25.427 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:25.427 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:25.427 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:03:25.427 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:25.427 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:25.427 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:25.684 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:25.684 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:25.684 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.684 CC lib/ftl/base/ftl_base_dev.o 00:03:25.684 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.685 CC lib/ftl/ftl_trace.o 00:03:25.685 LIB libspdk_nbd.a 00:03:25.685 SO libspdk_nbd.so.7.0 00:03:25.943 SYMLINK libspdk_nbd.so 00:03:25.943 LIB libspdk_scsi.a 00:03:25.943 SO libspdk_scsi.so.9.0 00:03:25.943 LIB libspdk_ublk.a 00:03:25.943 SYMLINK libspdk_scsi.so 00:03:25.943 SO libspdk_ublk.so.3.0 00:03:25.943 SYMLINK libspdk_ublk.so 00:03:26.200 CC lib/iscsi/conn.o 00:03:26.200 CC lib/vhost/vhost.o 00:03:26.200 CC lib/iscsi/init_grp.o 00:03:26.200 CC lib/vhost/vhost_rpc.o 00:03:26.200 CC lib/vhost/vhost_scsi.o 00:03:26.200 CC lib/iscsi/iscsi.o 00:03:26.200 CC lib/vhost/vhost_blk.o 00:03:26.200 CC lib/vhost/rte_vhost_user.o 00:03:26.200 CC lib/iscsi/md5.o 00:03:26.200 CC lib/iscsi/param.o 00:03:26.200 CC lib/iscsi/portal_grp.o 00:03:26.200 CC lib/iscsi/tgt_node.o 00:03:26.200 CC lib/iscsi/iscsi_subsystem.o 00:03:26.200 CC lib/iscsi/iscsi_rpc.o 00:03:26.200 CC lib/iscsi/task.o 00:03:26.459 LIB libspdk_ftl.a 00:03:26.718 SO libspdk_ftl.so.9.0 00:03:26.976 SYMLINK libspdk_ftl.so 00:03:27.234 LIB libspdk_vhost.a 00:03:27.491 SO libspdk_vhost.so.8.0 00:03:27.491 SYMLINK libspdk_vhost.so 00:03:27.491 LIB libspdk_nvmf.a 00:03:27.491 LIB libspdk_iscsi.a 00:03:27.491 SO libspdk_iscsi.so.8.0 00:03:27.491 SO libspdk_nvmf.so.18.1 00:03:27.750 SYMLINK libspdk_iscsi.so 00:03:27.750 SYMLINK libspdk_nvmf.so 00:03:28.008 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.008 CC module/vfu_device/vfu_virtio.o 00:03:28.008 CC module/vfu_device/vfu_virtio_blk.o 00:03:28.008 CC module/vfu_device/vfu_virtio_scsi.o 00:03:28.009 CC module/vfu_device/vfu_virtio_rpc.o 00:03:28.266 CC module/sock/posix/posix.o 00:03:28.266 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.266 CC module/accel/iaa/accel_iaa.o 00:03:28.266 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.266 CC module/accel/error/accel_error.o 00:03:28.266 CC module/accel/dsa/accel_dsa.o 00:03:28.266 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.266 CC module/accel/error/accel_error_rpc.o 00:03:28.266 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.266 CC module/keyring/linux/keyring.o 00:03:28.266 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.266 CC module/keyring/linux/keyring_rpc.o 00:03:28.266 CC module/blob/bdev/blob_bdev.o 00:03:28.266 CC module/accel/ioat/accel_ioat.o 00:03:28.266 CC module/keyring/file/keyring.o 00:03:28.266 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.266 CC module/keyring/file/keyring_rpc.o 00:03:28.266 LIB libspdk_env_dpdk_rpc.a 00:03:28.266 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.266 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.266 LIB libspdk_keyring_linux.a 00:03:28.266 LIB libspdk_keyring_file.a 00:03:28.266 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.266 LIB libspdk_scheduler_gscheduler.a 00:03:28.266 SO libspdk_keyring_linux.so.1.0 00:03:28.266 SO libspdk_keyring_file.so.1.0 00:03:28.266 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.266 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.266 LIB libspdk_accel_error.a 00:03:28.266 LIB libspdk_accel_ioat.a 00:03:28.266 LIB libspdk_scheduler_dynamic.a 00:03:28.525 LIB libspdk_accel_iaa.a 00:03:28.525 SO libspdk_accel_error.so.2.0 00:03:28.525 SO 
libspdk_scheduler_dynamic.so.4.0 00:03:28.525 SO libspdk_accel_ioat.so.6.0 00:03:28.525 SYMLINK libspdk_keyring_linux.so 00:03:28.525 SYMLINK libspdk_keyring_file.so 00:03:28.525 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.525 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.525 SO libspdk_accel_iaa.so.3.0 00:03:28.525 LIB libspdk_accel_dsa.a 00:03:28.525 SYMLINK libspdk_accel_error.so 00:03:28.525 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.525 SYMLINK libspdk_accel_ioat.so 00:03:28.525 LIB libspdk_blob_bdev.a 00:03:28.525 SYMLINK libspdk_accel_iaa.so 00:03:28.525 SO libspdk_accel_dsa.so.5.0 00:03:28.525 SO libspdk_blob_bdev.so.11.0 00:03:28.525 SYMLINK libspdk_accel_dsa.so 00:03:28.525 SYMLINK libspdk_blob_bdev.so 00:03:28.783 LIB libspdk_vfu_device.a 00:03:28.783 SO libspdk_vfu_device.so.3.0 00:03:28.783 CC module/bdev/error/vbdev_error.o 00:03:28.783 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.783 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.783 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.783 CC module/bdev/delay/vbdev_delay.o 00:03:28.783 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.783 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.783 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.783 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.783 CC module/bdev/nvme/bdev_nvme.o 00:03:28.783 CC module/bdev/gpt/gpt.o 00:03:28.783 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.783 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.783 CC module/bdev/split/vbdev_split.o 00:03:28.783 CC module/bdev/malloc/bdev_malloc.o 00:03:28.783 CC module/bdev/raid/bdev_raid.o 00:03:28.783 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.783 CC module/bdev/nvme/nvme_rpc.o 00:03:28.783 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.783 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.783 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:28.783 CC module/bdev/raid/bdev_raid_rpc.o 00:03:28.783 CC module/bdev/nvme/bdev_mdns_client.o 00:03:28.783 CC module/bdev/nvme/vbdev_opal.o 00:03:28.783 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.783 CC module/bdev/raid/bdev_raid_sb.o 00:03:28.783 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.783 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:28.783 CC module/bdev/raid/raid0.o 00:03:28.783 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:28.783 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:28.783 CC module/bdev/raid/raid1.o 00:03:28.783 CC module/bdev/raid/concat.o 00:03:28.783 CC module/bdev/null/bdev_null.o 00:03:28.783 CC module/bdev/ftl/bdev_ftl.o 00:03:28.783 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:28.783 CC module/bdev/null/bdev_null_rpc.o 00:03:28.783 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:28.783 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:28.783 CC module/bdev/aio/bdev_aio.o 00:03:28.783 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:28.783 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.783 SYMLINK libspdk_vfu_device.so 00:03:29.042 LIB libspdk_sock_posix.a 00:03:29.042 SO libspdk_sock_posix.so.6.0 00:03:29.042 SYMLINK libspdk_sock_posix.so 00:03:29.042 LIB libspdk_blobfs_bdev.a 00:03:29.301 LIB libspdk_bdev_gpt.a 00:03:29.301 SO libspdk_blobfs_bdev.so.6.0 00:03:29.301 SO libspdk_bdev_gpt.so.6.0 00:03:29.301 LIB libspdk_bdev_split.a 00:03:29.301 LIB libspdk_bdev_error.a 00:03:29.301 SYMLINK libspdk_blobfs_bdev.so 00:03:29.301 SO libspdk_bdev_split.so.6.0 00:03:29.301 SYMLINK libspdk_bdev_gpt.so 00:03:29.301 SO libspdk_bdev_error.so.6.0 00:03:29.301 LIB libspdk_bdev_null.a 00:03:29.301 SYMLINK libspdk_bdev_split.so 00:03:29.301 SO 
libspdk_bdev_null.so.6.0 00:03:29.301 LIB libspdk_bdev_passthru.a 00:03:29.301 SYMLINK libspdk_bdev_error.so 00:03:29.301 LIB libspdk_bdev_ftl.a 00:03:29.301 LIB libspdk_bdev_malloc.a 00:03:29.301 LIB libspdk_bdev_aio.a 00:03:29.301 SO libspdk_bdev_passthru.so.6.0 00:03:29.301 LIB libspdk_bdev_iscsi.a 00:03:29.301 SO libspdk_bdev_ftl.so.6.0 00:03:29.301 SO libspdk_bdev_malloc.so.6.0 00:03:29.301 LIB libspdk_bdev_delay.a 00:03:29.301 SYMLINK libspdk_bdev_null.so 00:03:29.301 SO libspdk_bdev_aio.so.6.0 00:03:29.301 SO libspdk_bdev_iscsi.so.6.0 00:03:29.301 LIB libspdk_bdev_zone_block.a 00:03:29.301 SO libspdk_bdev_delay.so.6.0 00:03:29.301 SYMLINK libspdk_bdev_passthru.so 00:03:29.560 SO libspdk_bdev_zone_block.so.6.0 00:03:29.560 SYMLINK libspdk_bdev_malloc.so 00:03:29.560 SYMLINK libspdk_bdev_ftl.so 00:03:29.560 SYMLINK libspdk_bdev_aio.so 00:03:29.560 SYMLINK libspdk_bdev_iscsi.so 00:03:29.560 SYMLINK libspdk_bdev_delay.so 00:03:29.560 SYMLINK libspdk_bdev_zone_block.so 00:03:29.560 LIB libspdk_bdev_lvol.a 00:03:29.560 LIB libspdk_bdev_virtio.a 00:03:29.560 SO libspdk_bdev_lvol.so.6.0 00:03:29.560 SO libspdk_bdev_virtio.so.6.0 00:03:29.560 SYMLINK libspdk_bdev_lvol.so 00:03:29.560 SYMLINK libspdk_bdev_virtio.so 00:03:29.817 LIB libspdk_bdev_raid.a 00:03:30.075 SO libspdk_bdev_raid.so.6.0 00:03:30.075 SYMLINK libspdk_bdev_raid.so 00:03:31.447 LIB libspdk_bdev_nvme.a 00:03:31.447 SO libspdk_bdev_nvme.so.7.0 00:03:31.447 SYMLINK libspdk_bdev_nvme.so 00:03:31.705 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.705 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:31.705 CC module/event/subsystems/sock/sock.o 00:03:31.705 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.705 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.705 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.705 CC module/event/subsystems/vmd/vmd.o 00:03:31.705 CC module/event/subsystems/keyring/keyring.o 00:03:31.705 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.705 LIB libspdk_event_keyring.a 00:03:31.705 LIB libspdk_event_vhost_blk.a 00:03:31.705 LIB libspdk_event_vfu_tgt.a 00:03:31.705 LIB libspdk_event_scheduler.a 00:03:31.705 LIB libspdk_event_vmd.a 00:03:31.705 LIB libspdk_event_sock.a 00:03:31.705 SO libspdk_event_keyring.so.1.0 00:03:31.705 LIB libspdk_event_iobuf.a 00:03:31.705 SO libspdk_event_vhost_blk.so.3.0 00:03:31.705 SO libspdk_event_vfu_tgt.so.3.0 00:03:31.705 SO libspdk_event_scheduler.so.4.0 00:03:31.705 SO libspdk_event_sock.so.5.0 00:03:31.705 SO libspdk_event_vmd.so.6.0 00:03:31.964 SO libspdk_event_iobuf.so.3.0 00:03:31.964 SYMLINK libspdk_event_keyring.so 00:03:31.964 SYMLINK libspdk_event_vfu_tgt.so 00:03:31.964 SYMLINK libspdk_event_vhost_blk.so 00:03:31.964 SYMLINK libspdk_event_scheduler.so 00:03:31.964 SYMLINK libspdk_event_sock.so 00:03:31.964 SYMLINK libspdk_event_vmd.so 00:03:31.964 SYMLINK libspdk_event_iobuf.so 00:03:31.964 CC module/event/subsystems/accel/accel.o 00:03:32.223 LIB libspdk_event_accel.a 00:03:32.223 SO libspdk_event_accel.so.6.0 00:03:32.223 SYMLINK libspdk_event_accel.so 00:03:32.483 CC module/event/subsystems/bdev/bdev.o 00:03:32.744 LIB libspdk_event_bdev.a 00:03:32.744 SO libspdk_event_bdev.so.6.0 00:03:32.744 SYMLINK libspdk_event_bdev.so 00:03:33.003 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.003 CC module/event/subsystems/scsi/scsi.o 00:03:33.003 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.003 CC module/event/subsystems/nbd/nbd.o 00:03:33.003 CC module/event/subsystems/ublk/ublk.o 00:03:33.003 LIB libspdk_event_nbd.a 
00:03:33.003 LIB libspdk_event_ublk.a 00:03:33.003 SO libspdk_event_nbd.so.6.0 00:03:33.003 LIB libspdk_event_scsi.a 00:03:33.003 SO libspdk_event_ublk.so.3.0 00:03:33.003 SO libspdk_event_scsi.so.6.0 00:03:33.003 SYMLINK libspdk_event_nbd.so 00:03:33.003 SYMLINK libspdk_event_ublk.so 00:03:33.003 SYMLINK libspdk_event_scsi.so 00:03:33.259 LIB libspdk_event_nvmf.a 00:03:33.259 SO libspdk_event_nvmf.so.6.0 00:03:33.259 SYMLINK libspdk_event_nvmf.so 00:03:33.259 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.259 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.516 LIB libspdk_event_vhost_scsi.a 00:03:33.516 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.516 LIB libspdk_event_iscsi.a 00:03:33.516 SO libspdk_event_iscsi.so.6.0 00:03:33.516 SYMLINK libspdk_event_vhost_scsi.so 00:03:33.516 SYMLINK libspdk_event_iscsi.so 00:03:33.773 SO libspdk.so.6.0 00:03:33.773 SYMLINK libspdk.so 00:03:33.773 CC app/spdk_lspci/spdk_lspci.o 00:03:33.773 CXX app/trace/trace.o 00:03:33.773 CC app/trace_record/trace_record.o 00:03:33.773 CC test/rpc_client/rpc_client_test.o 00:03:33.773 CC app/spdk_top/spdk_top.o 00:03:33.773 TEST_HEADER include/spdk/accel.h 00:03:33.773 TEST_HEADER include/spdk/accel_module.h 00:03:33.773 TEST_HEADER include/spdk/assert.h 00:03:33.773 TEST_HEADER include/spdk/barrier.h 00:03:33.773 CC app/spdk_nvme_identify/identify.o 00:03:33.773 TEST_HEADER include/spdk/base64.h 00:03:33.773 CC app/spdk_nvme_discover/discovery_aer.o 00:03:33.773 CC app/spdk_nvme_perf/perf.o 00:03:33.773 TEST_HEADER include/spdk/bdev.h 00:03:33.773 TEST_HEADER include/spdk/bdev_module.h 00:03:33.773 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.773 TEST_HEADER include/spdk/bit_array.h 00:03:33.773 TEST_HEADER include/spdk/bit_pool.h 00:03:33.773 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.773 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.773 TEST_HEADER include/spdk/blobfs.h 00:03:33.773 TEST_HEADER include/spdk/blob.h 00:03:33.773 TEST_HEADER include/spdk/conf.h 00:03:33.773 TEST_HEADER include/spdk/cpuset.h 00:03:33.773 TEST_HEADER include/spdk/config.h 00:03:33.773 TEST_HEADER include/spdk/crc16.h 00:03:33.773 TEST_HEADER include/spdk/crc32.h 00:03:33.773 TEST_HEADER include/spdk/crc64.h 00:03:33.773 TEST_HEADER include/spdk/dif.h 00:03:33.773 TEST_HEADER include/spdk/dma.h 00:03:33.773 TEST_HEADER include/spdk/endian.h 00:03:33.773 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.773 TEST_HEADER include/spdk/env.h 00:03:33.773 TEST_HEADER include/spdk/event.h 00:03:33.773 TEST_HEADER include/spdk/fd_group.h 00:03:33.773 TEST_HEADER include/spdk/file.h 00:03:33.773 TEST_HEADER include/spdk/fd.h 00:03:33.773 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.773 TEST_HEADER include/spdk/ftl.h 00:03:33.773 TEST_HEADER include/spdk/hexlify.h 00:03:33.773 TEST_HEADER include/spdk/histogram_data.h 00:03:33.773 TEST_HEADER include/spdk/idxd.h 00:03:33.773 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.773 TEST_HEADER include/spdk/init.h 00:03:33.773 TEST_HEADER include/spdk/ioat.h 00:03:33.773 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.773 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.773 TEST_HEADER include/spdk/json.h 00:03:33.773 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.773 TEST_HEADER include/spdk/keyring.h 00:03:33.773 TEST_HEADER include/spdk/keyring_module.h 00:03:33.773 TEST_HEADER include/spdk/likely.h 00:03:33.773 TEST_HEADER include/spdk/log.h 00:03:33.773 TEST_HEADER include/spdk/lvol.h 00:03:33.773 TEST_HEADER include/spdk/mmio.h 00:03:33.773 TEST_HEADER include/spdk/memory.h 00:03:33.773 
TEST_HEADER include/spdk/nbd.h 00:03:33.773 TEST_HEADER include/spdk/net.h 00:03:33.773 TEST_HEADER include/spdk/notify.h 00:03:33.773 TEST_HEADER include/spdk/nvme.h 00:03:33.773 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.773 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.773 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.773 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.773 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.773 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.773 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.773 TEST_HEADER include/spdk/nvmf.h 00:03:33.773 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.773 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.773 TEST_HEADER include/spdk/opal.h 00:03:33.773 TEST_HEADER include/spdk/opal_spec.h 00:03:33.773 TEST_HEADER include/spdk/pci_ids.h 00:03:33.773 TEST_HEADER include/spdk/pipe.h 00:03:33.773 TEST_HEADER include/spdk/queue.h 00:03:33.773 TEST_HEADER include/spdk/reduce.h 00:03:33.774 TEST_HEADER include/spdk/rpc.h 00:03:33.774 TEST_HEADER include/spdk/scsi.h 00:03:33.774 TEST_HEADER include/spdk/scheduler.h 00:03:33.774 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.774 TEST_HEADER include/spdk/sock.h 00:03:33.774 TEST_HEADER include/spdk/stdinc.h 00:03:33.774 TEST_HEADER include/spdk/string.h 00:03:33.774 TEST_HEADER include/spdk/thread.h 00:03:33.774 TEST_HEADER include/spdk/trace.h 00:03:33.774 TEST_HEADER include/spdk/trace_parser.h 00:03:33.774 TEST_HEADER include/spdk/tree.h 00:03:33.774 TEST_HEADER include/spdk/ublk.h 00:03:33.774 TEST_HEADER include/spdk/util.h 00:03:33.774 TEST_HEADER include/spdk/uuid.h 00:03:33.774 TEST_HEADER include/spdk/version.h 00:03:33.774 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.774 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.774 TEST_HEADER include/spdk/vhost.h 00:03:33.774 TEST_HEADER include/spdk/xor.h 00:03:33.774 TEST_HEADER include/spdk/vmd.h 00:03:33.774 TEST_HEADER include/spdk/zipf.h 00:03:34.038 CXX test/cpp_headers/accel.o 00:03:34.038 CXX test/cpp_headers/accel_module.o 00:03:34.038 CXX test/cpp_headers/assert.o 00:03:34.038 CXX test/cpp_headers/barrier.o 00:03:34.038 CXX test/cpp_headers/base64.o 00:03:34.038 CXX test/cpp_headers/bdev.o 00:03:34.038 CXX test/cpp_headers/bdev_module.o 00:03:34.038 CXX test/cpp_headers/bdev_zone.o 00:03:34.038 CXX test/cpp_headers/bit_array.o 00:03:34.038 CXX test/cpp_headers/bit_pool.o 00:03:34.038 CXX test/cpp_headers/blob_bdev.o 00:03:34.038 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.038 CXX test/cpp_headers/blobfs.o 00:03:34.038 CXX test/cpp_headers/blob.o 00:03:34.038 CXX test/cpp_headers/conf.o 00:03:34.038 CXX test/cpp_headers/config.o 00:03:34.038 CXX test/cpp_headers/cpuset.o 00:03:34.038 CXX test/cpp_headers/crc16.o 00:03:34.038 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.038 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.038 CC app/spdk_dd/spdk_dd.o 00:03:34.038 CC app/nvmf_tgt/nvmf_main.o 00:03:34.038 CXX test/cpp_headers/crc32.o 00:03:34.038 CC test/thread/poller_perf/poller_perf.o 00:03:34.038 CC test/app/histogram_perf/histogram_perf.o 00:03:34.038 CC app/spdk_tgt/spdk_tgt.o 00:03:34.038 CC examples/ioat/verify/verify.o 00:03:34.038 CC test/env/memory/memory_ut.o 00:03:34.038 CC test/app/stub/stub.o 00:03:34.038 CC test/app/jsoncat/jsoncat.o 00:03:34.038 CC examples/util/zipf/zipf.o 00:03:34.038 CC test/env/vtophys/vtophys.o 00:03:34.038 CC examples/ioat/perf/perf.o 00:03:34.038 CC app/fio/nvme/fio_plugin.o 00:03:34.038 CC test/env/pci/pci_ut.o 00:03:34.038 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 
00:03:34.039 CC test/app/bdev_svc/bdev_svc.o 00:03:34.039 CC test/dma/test_dma/test_dma.o 00:03:34.039 CC app/fio/bdev/fio_plugin.o 00:03:34.039 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:34.298 LINK spdk_lspci 00:03:34.298 CC test/env/mem_callbacks/mem_callbacks.o 00:03:34.298 LINK rpc_client_test 00:03:34.298 LINK spdk_nvme_discover 00:03:34.298 LINK poller_perf 00:03:34.298 LINK histogram_perf 00:03:34.298 LINK jsoncat 00:03:34.298 LINK spdk_trace_record 00:03:34.298 LINK vtophys 00:03:34.298 LINK zipf 00:03:34.298 CXX test/cpp_headers/crc64.o 00:03:34.298 LINK nvmf_tgt 00:03:34.298 LINK interrupt_tgt 00:03:34.298 CXX test/cpp_headers/dif.o 00:03:34.298 CXX test/cpp_headers/dma.o 00:03:34.298 LINK env_dpdk_post_init 00:03:34.298 CXX test/cpp_headers/endian.o 00:03:34.298 CXX test/cpp_headers/env_dpdk.o 00:03:34.298 CXX test/cpp_headers/env.o 00:03:34.298 CXX test/cpp_headers/event.o 00:03:34.298 CXX test/cpp_headers/fd_group.o 00:03:34.298 CXX test/cpp_headers/fd.o 00:03:34.298 LINK iscsi_tgt 00:03:34.298 CXX test/cpp_headers/file.o 00:03:34.298 CXX test/cpp_headers/ftl.o 00:03:34.298 LINK stub 00:03:34.298 CXX test/cpp_headers/gpt_spec.o 00:03:34.298 CXX test/cpp_headers/hexlify.o 00:03:34.563 CXX test/cpp_headers/histogram_data.o 00:03:34.563 LINK bdev_svc 00:03:34.563 CXX test/cpp_headers/idxd.o 00:03:34.563 CXX test/cpp_headers/idxd_spec.o 00:03:34.563 LINK spdk_tgt 00:03:34.563 LINK verify 00:03:34.563 LINK ioat_perf 00:03:34.563 CXX test/cpp_headers/init.o 00:03:34.563 CXX test/cpp_headers/ioat.o 00:03:34.563 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.563 CXX test/cpp_headers/ioat_spec.o 00:03:34.563 CXX test/cpp_headers/iscsi_spec.o 00:03:34.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.563 CXX test/cpp_headers/json.o 00:03:34.563 CXX test/cpp_headers/jsonrpc.o 00:03:34.563 CXX test/cpp_headers/keyring.o 00:03:34.824 LINK spdk_dd 00:03:34.824 CXX test/cpp_headers/keyring_module.o 00:03:34.824 CXX test/cpp_headers/likely.o 00:03:34.824 CXX test/cpp_headers/log.o 00:03:34.824 CXX test/cpp_headers/lvol.o 00:03:34.824 CXX test/cpp_headers/memory.o 00:03:34.824 LINK spdk_trace 00:03:34.824 CXX test/cpp_headers/mmio.o 00:03:34.824 CXX test/cpp_headers/nbd.o 00:03:34.824 CXX test/cpp_headers/net.o 00:03:34.824 LINK pci_ut 00:03:34.824 CXX test/cpp_headers/notify.o 00:03:34.824 CXX test/cpp_headers/nvme.o 00:03:34.824 CXX test/cpp_headers/nvme_intel.o 00:03:34.824 CXX test/cpp_headers/nvme_ocssd.o 00:03:34.824 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:34.824 CXX test/cpp_headers/nvme_spec.o 00:03:34.824 LINK test_dma 00:03:34.824 CXX test/cpp_headers/nvme_zns.o 00:03:34.824 CXX test/cpp_headers/nvmf_cmd.o 00:03:34.824 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:34.824 CXX test/cpp_headers/nvmf.o 00:03:34.824 CXX test/cpp_headers/nvmf_spec.o 00:03:34.824 CXX test/cpp_headers/nvmf_transport.o 00:03:34.824 CXX test/cpp_headers/opal.o 00:03:34.824 CXX test/cpp_headers/opal_spec.o 00:03:35.090 LINK nvme_fuzz 00:03:35.090 CC test/event/event_perf/event_perf.o 00:03:35.090 CXX test/cpp_headers/pci_ids.o 00:03:35.090 CC test/event/reactor/reactor.o 00:03:35.090 CC test/event/reactor_perf/reactor_perf.o 00:03:35.090 CXX test/cpp_headers/pipe.o 00:03:35.090 CXX test/cpp_headers/queue.o 00:03:35.090 CXX test/cpp_headers/reduce.o 00:03:35.090 CC test/event/app_repeat/app_repeat.o 00:03:35.090 CXX test/cpp_headers/rpc.o 00:03:35.090 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.090 LINK spdk_nvme 00:03:35.090 CC 
examples/sock/hello_world/hello_sock.o 00:03:35.090 CC examples/idxd/perf/perf.o 00:03:35.090 LINK spdk_bdev 00:03:35.090 CXX test/cpp_headers/scheduler.o 00:03:35.090 CXX test/cpp_headers/scsi.o 00:03:35.090 CXX test/cpp_headers/scsi_spec.o 00:03:35.090 CC test/event/scheduler/scheduler.o 00:03:35.090 CXX test/cpp_headers/sock.o 00:03:35.090 CC examples/thread/thread/thread_ex.o 00:03:35.090 CXX test/cpp_headers/stdinc.o 00:03:35.090 CXX test/cpp_headers/string.o 00:03:35.349 CXX test/cpp_headers/thread.o 00:03:35.349 CXX test/cpp_headers/trace.o 00:03:35.349 CXX test/cpp_headers/trace_parser.o 00:03:35.349 CXX test/cpp_headers/tree.o 00:03:35.349 CXX test/cpp_headers/ublk.o 00:03:35.349 CXX test/cpp_headers/util.o 00:03:35.349 LINK reactor_perf 00:03:35.349 CC examples/vmd/led/led.o 00:03:35.349 LINK reactor 00:03:35.349 LINK event_perf 00:03:35.349 CXX test/cpp_headers/uuid.o 00:03:35.349 CXX test/cpp_headers/version.o 00:03:35.349 CXX test/cpp_headers/vfio_user_pci.o 00:03:35.349 CXX test/cpp_headers/vfio_user_spec.o 00:03:35.349 CXX test/cpp_headers/vhost.o 00:03:35.349 CXX test/cpp_headers/vmd.o 00:03:35.349 CXX test/cpp_headers/xor.o 00:03:35.349 CXX test/cpp_headers/zipf.o 00:03:35.349 LINK mem_callbacks 00:03:35.349 LINK spdk_nvme_perf 00:03:35.349 LINK lsvmd 00:03:35.349 CC app/vhost/vhost.o 00:03:35.349 LINK app_repeat 00:03:35.349 LINK spdk_nvme_identify 00:03:35.609 LINK vhost_fuzz 00:03:35.609 CC test/nvme/e2edp/nvme_dp.o 00:03:35.609 CC test/nvme/sgl/sgl.o 00:03:35.609 CC test/nvme/reset/reset.o 00:03:35.609 CC test/nvme/startup/startup.o 00:03:35.609 CC test/nvme/err_injection/err_injection.o 00:03:35.609 LINK spdk_top 00:03:35.609 CC test/nvme/aer/aer.o 00:03:35.609 CC test/nvme/overhead/overhead.o 00:03:35.609 CC test/nvme/reserve/reserve.o 00:03:35.609 LINK hello_sock 00:03:35.609 LINK scheduler 00:03:35.609 CC test/accel/dif/dif.o 00:03:35.609 CC test/blobfs/mkfs/mkfs.o 00:03:35.609 LINK led 00:03:35.609 CC test/nvme/boot_partition/boot_partition.o 00:03:35.609 CC test/nvme/simple_copy/simple_copy.o 00:03:35.609 LINK thread 00:03:35.609 CC test/nvme/connect_stress/connect_stress.o 00:03:35.609 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:35.609 CC test/nvme/compliance/nvme_compliance.o 00:03:35.609 CC test/nvme/fused_ordering/fused_ordering.o 00:03:35.609 CC test/lvol/esnap/esnap.o 00:03:35.609 CC test/nvme/fdp/fdp.o 00:03:35.609 CC test/nvme/cuse/cuse.o 00:03:35.867 LINK vhost 00:03:35.867 LINK idxd_perf 00:03:35.867 LINK connect_stress 00:03:35.867 LINK startup 00:03:35.867 LINK doorbell_aers 00:03:35.867 LINK mkfs 00:03:35.867 LINK err_injection 00:03:35.867 LINK simple_copy 00:03:35.867 LINK reserve 00:03:35.867 LINK boot_partition 00:03:36.125 LINK overhead 00:03:36.125 LINK reset 00:03:36.125 LINK fused_ordering 00:03:36.125 LINK aer 00:03:36.125 LINK nvme_dp 00:03:36.125 LINK memory_ut 00:03:36.125 LINK sgl 00:03:36.125 LINK nvme_compliance 00:03:36.125 CC examples/nvme/hello_world/hello_world.o 00:03:36.125 CC examples/nvme/reconnect/reconnect.o 00:03:36.125 CC examples/nvme/hotplug/hotplug.o 00:03:36.125 CC examples/nvme/arbitration/arbitration.o 00:03:36.125 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:36.125 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:36.125 CC examples/nvme/abort/abort.o 00:03:36.125 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:36.125 LINK fdp 00:03:36.125 CC examples/accel/perf/accel_perf.o 00:03:36.125 LINK dif 00:03:36.403 CC examples/blob/cli/blobcli.o 00:03:36.403 CC examples/blob/hello_world/hello_blob.o 00:03:36.403 
LINK pmr_persistence 00:03:36.403 LINK cmb_copy 00:03:36.403 LINK hotplug 00:03:36.403 LINK hello_world 00:03:36.403 LINK arbitration 00:03:36.661 LINK hello_blob 00:03:36.661 LINK reconnect 00:03:36.662 LINK abort 00:03:36.662 CC test/bdev/bdevio/bdevio.o 00:03:36.662 LINK nvme_manage 00:03:36.662 LINK accel_perf 00:03:36.920 LINK blobcli 00:03:36.920 LINK iscsi_fuzz 00:03:37.177 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.177 LINK bdevio 00:03:37.177 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.177 LINK cuse 00:03:37.435 LINK hello_bdev 00:03:37.999 LINK bdevperf 00:03:38.257 CC examples/nvmf/nvmf/nvmf.o 00:03:38.514 LINK nvmf 00:03:41.037 LINK esnap 00:03:41.297 00:03:41.297 real 0m49.012s 00:03:41.297 user 10m9.074s 00:03:41.297 sys 2m29.295s 00:03:41.297 15:40:10 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:41.297 15:40:10 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.297 ************************************ 00:03:41.297 END TEST make 00:03:41.297 ************************************ 00:03:41.297 15:40:10 -- common/autotest_common.sh@1142 -- $ return 0 00:03:41.297 15:40:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.297 15:40:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:41.297 15:40:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.297 15:40:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.297 15:40:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.297 15:40:10 -- pm/common@44 -- $ pid=4012718 00:03:41.297 15:40:10 -- pm/common@50 -- $ kill -TERM 4012718 00:03:41.297 15:40:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.297 15:40:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.297 15:40:10 -- pm/common@44 -- $ pid=4012720 00:03:41.297 15:40:10 -- pm/common@50 -- $ kill -TERM 4012720 00:03:41.297 15:40:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.297 15:40:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:41.297 15:40:10 -- pm/common@44 -- $ pid=4012722 00:03:41.297 15:40:10 -- pm/common@50 -- $ kill -TERM 4012722 00:03:41.297 15:40:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.297 15:40:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:41.297 15:40:10 -- pm/common@44 -- $ pid=4012749 00:03:41.297 15:40:10 -- pm/common@50 -- $ sudo -E kill -TERM 4012749 00:03:41.297 15:40:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:41.297 15:40:10 -- nvmf/common.sh@7 -- # uname -s 00:03:41.297 15:40:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.297 15:40:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.297 15:40:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.297 15:40:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.297 15:40:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.297 15:40:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.297 15:40:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.298 15:40:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.298 15:40:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.298 15:40:10 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.298 15:40:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:41.298 15:40:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:41.298 15:40:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.298 15:40:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.298 15:40:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:41.298 15:40:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.298 15:40:10 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:41.298 15:40:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.298 15:40:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.298 15:40:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.298 15:40:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.298 15:40:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.298 15:40:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.298 15:40:10 -- paths/export.sh@5 -- # export PATH 00:03:41.298 15:40:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.298 15:40:10 -- nvmf/common.sh@47 -- # : 0 00:03:41.298 15:40:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:41.298 15:40:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:41.298 15:40:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.298 15:40:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.298 15:40:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.298 15:40:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:41.298 15:40:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:41.298 15:40:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:41.298 15:40:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.298 15:40:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.298 15:40:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.298 15:40:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.298 15:40:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:41.298 15:40:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.298 15:40:10 -- 
spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:41.298 15:40:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.298 15:40:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.298 15:40:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.298 15:40:10 -- spdk/autotest.sh@48 -- # udevadm_pid=4068803 00:03:41.298 15:40:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.298 15:40:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.298 15:40:10 -- pm/common@17 -- # local monitor 00:03:41.298 15:40:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.298 15:40:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.298 15:40:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.298 15:40:10 -- pm/common@21 -- # date +%s 00:03:41.298 15:40:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.298 15:40:10 -- pm/common@21 -- # date +%s 00:03:41.298 15:40:10 -- pm/common@25 -- # sleep 1 00:03:41.298 15:40:10 -- pm/common@21 -- # date +%s 00:03:41.298 15:40:10 -- pm/common@21 -- # date +%s 00:03:41.298 15:40:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791610 00:03:41.298 15:40:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791610 00:03:41.298 15:40:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791610 00:03:41.298 15:40:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791610 00:03:41.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791610_collect-vmstat.pm.log 00:03:41.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791610_collect-cpu-load.pm.log 00:03:41.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791610_collect-cpu-temp.pm.log 00:03:41.298 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791610_collect-bmc-pm.bmc.pm.log 00:03:42.236 15:40:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.236 15:40:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.236 15:40:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.236 15:40:11 -- common/autotest_common.sh@10 -- # set +x 00:03:42.236 15:40:11 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.236 15:40:11 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:42.236 15:40:11 -- common/autotest_common.sh@10 -- # set +x 00:03:42.236 15:40:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:42.236 15:40:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.236 15:40:11 -- spdk/autotest.sh@61 -- # 
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.236 15:40:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:42.236 15:40:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:42.236 15:40:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.236 15:40:11 -- common/autotest_common.sh@1455 -- # uname 00:03:42.236 15:40:11 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:42.236 15:40:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.236 15:40:11 -- common/autotest_common.sh@1475 -- # uname 00:03:42.236 15:40:11 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:42.236 15:40:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:42.236 15:40:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:42.236 15:40:11 -- spdk/autotest.sh@72 -- # hash lcov 00:03:42.236 15:40:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:42.236 15:40:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:42.236 --rc lcov_branch_coverage=1 00:03:42.236 --rc lcov_function_coverage=1 00:03:42.236 --rc genhtml_branch_coverage=1 00:03:42.236 --rc genhtml_function_coverage=1 00:03:42.236 --rc genhtml_legend=1 00:03:42.236 --rc geninfo_all_blocks=1 00:03:42.236 ' 00:03:42.236 15:40:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:42.236 --rc lcov_branch_coverage=1 00:03:42.236 --rc lcov_function_coverage=1 00:03:42.236 --rc genhtml_branch_coverage=1 00:03:42.236 --rc genhtml_function_coverage=1 00:03:42.236 --rc genhtml_legend=1 00:03:42.236 --rc geninfo_all_blocks=1 00:03:42.236 ' 00:03:42.236 15:40:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:42.236 --rc lcov_branch_coverage=1 00:03:42.236 --rc lcov_function_coverage=1 00:03:42.236 --rc genhtml_branch_coverage=1 00:03:42.236 --rc genhtml_function_coverage=1 00:03:42.236 --rc genhtml_legend=1 00:03:42.236 --rc geninfo_all_blocks=1 00:03:42.236 --no-external' 00:03:42.236 15:40:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:42.236 --rc lcov_branch_coverage=1 00:03:42.236 --rc lcov_function_coverage=1 00:03:42.236 --rc genhtml_branch_coverage=1 00:03:42.236 --rc genhtml_function_coverage=1 00:03:42.236 --rc genhtml_legend=1 00:03:42.236 --rc geninfo_all_blocks=1 00:03:42.236 --no-external' 00:03:42.236 15:40:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:42.495 lcov: LCOV version 1.14 00:03:42.495 15:40:12 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:57.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:57.356 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:12.224 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:12.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:12.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:12.225 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:12.225 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:12.225 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:12.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:12.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:12.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:16.449 15:40:45 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:16.449 15:40:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:16.449 15:40:45 -- common/autotest_common.sh@10 -- # set +x 00:04:16.449 15:40:45 -- spdk/autotest.sh@91 -- # rm -f 00:04:16.449 15:40:45 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.385 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:17.385 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:17.385 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:17.385 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:17.385 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:17.645 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:17.645 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:17.645 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:17.645 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:04:17.645 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:17.645 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:17.645 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:17.645 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:17.645 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:17.645 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:17.645 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:17.645 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:17.645 15:40:47 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:17.645 15:40:47 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:17.645 15:40:47 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:17.645 15:40:47 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:17.645 15:40:47 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:17.645 15:40:47 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:17.645 15:40:47 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
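Note on the long run of "geninfo: WARNING: ... no functions found" messages above: they are produced by the initial lcov baseline capture traced earlier (lcov -c -i -t Baseline ...). With -i, geninfo records zero execution counts for every instrumented object, and header-only .gcno files that contain no functions are reported and skipped rather than failing the run, so the warnings are typically harmless. A minimal sketch of that capture-and-merge pattern, assuming a gcov-instrumented build under ./build and a hypothetical ./coverage output directory (not the paths used by this job):

  # Zero-count baseline taken right after the build (-i = initial capture).
  lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
       -q -c -i -d ./build -t Baseline -o ./coverage/cov_base.info

  # ... run the instrumented tests ...

  # Post-test capture, then merge with the baseline so files that were
  # never executed still appear with 0% coverage in the final report.
  lcov -q -c -d ./build -t Tests -o ./coverage/cov_test.info
  lcov -q -a ./coverage/cov_base.info -a ./coverage/cov_test.info \
       -o ./coverage/cov_total.info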
00:04:17.645 15:40:47 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.645 15:40:47 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:17.645 15:40:47 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:17.645 15:40:47 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.645 15:40:47 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:17.645 15:40:47 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:17.645 15:40:47 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:17.645 15:40:47 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:17.904 No valid GPT data, bailing 00:04:17.904 15:40:47 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.904 15:40:47 -- scripts/common.sh@391 -- # pt= 00:04:17.904 15:40:47 -- scripts/common.sh@392 -- # return 1 00:04:17.904 15:40:47 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:17.904 1+0 records in 00:04:17.904 1+0 records out 00:04:17.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00483988 s, 217 MB/s 00:04:17.904 15:40:47 -- spdk/autotest.sh@118 -- # sync 00:04:17.904 15:40:47 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:17.904 15:40:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:17.904 15:40:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:19.805 15:40:49 -- spdk/autotest.sh@124 -- # uname -s 00:04:19.805 15:40:49 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:19.805 15:40:49 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:19.805 15:40:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.805 15:40:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.805 15:40:49 -- common/autotest_common.sh@10 -- # set +x 00:04:19.805 ************************************ 00:04:19.805 START TEST setup.sh 00:04:19.805 ************************************ 00:04:19.805 15:40:49 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:19.805 * Looking for test storage... 00:04:19.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.805 15:40:49 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:19.805 15:40:49 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:19.805 15:40:49 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:19.805 15:40:49 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.805 15:40:49 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.805 15:40:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.805 ************************************ 00:04:19.805 START TEST acl 00:04:19.805 ************************************ 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:19.805 * Looking for test storage... 
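The sequence traced just above is the pre-test disk preparation: each /dev/nvme*n* namespace is skipped if its block device is zoned, probed for an existing partition table, and only wiped (first MiB zeroed) when nothing claims it. A condensed sketch of that flow; the job's trace probes the GPT with scripts/spdk-gpt.py, while this sketch uses blkid as a stand-in, so treat the details as illustrative rather than the exact autotest logic:

  # Collect zoned namespaces so they are never wiped.
  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      dev=${nvme##*/}
      if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
          zoned_devs[$dev]=1
      fi
  done

  shopt -s extglob
  for dev in /dev/nvme*n!(*p*); do
      name=${dev##*/}
      [[ -n ${zoned_devs[$name]:-} ]] && continue       # leave zoned devices alone
      # Treat a readable partition table as "in use" and skip the device.
      if blkid -s PTTYPE -o value "$dev" | grep -q .; then
          continue
      fi
      dd if=/dev/zero of="$dev" bs=1M count=1           # clear any stale metadata
  done
  sync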
00:04:19.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.805 15:40:49 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.805 15:40:49 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:19.805 15:40:49 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:19.805 15:40:49 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:19.805 15:40:49 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:19.805 15:40:49 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:19.805 15:40:49 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:19.805 15:40:49 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.805 15:40:49 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.184 15:40:50 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:21.184 15:40:50 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:21.184 15:40:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.184 15:40:50 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:21.184 15:40:50 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.184 15:40:50 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:22.562 Hugepages 00:04:22.562 node hugesize free / total 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.562 00:04:22.562 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.562 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:22.563 15:40:52 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:22.563 15:40:52 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.563 15:40:52 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.563 15:40:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:22.563 ************************************ 00:04:22.563 START TEST denied 00:04:22.563 ************************************ 00:04:22.563 15:40:52 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:22.563 15:40:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:04:22.563 15:40:52 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:22.563 15:40:52 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:04:22.563 15:40:52 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.563 15:40:52 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.466 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:04:24.466 15:40:53 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.466 15:40:53 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:27.001 00:04:27.001 real 0m4.093s 00:04:27.001 user 0m1.203s 00:04:27.001 sys 0m1.903s 00:04:27.001 15:40:56 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.001 15:40:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:27.001 ************************************ 00:04:27.001 END TEST denied 00:04:27.001 ************************************ 00:04:27.001 15:40:56 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:27.001 15:40:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:27.001 15:40:56 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.001 15:40:56 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.001 15:40:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:27.001 ************************************ 00:04:27.001 START TEST allowed 00:04:27.001 ************************************ 00:04:27.001 15:40:56 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:27.001 15:40:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:04:27.001 15:40:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:27.001 15:40:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:04:27.001 15:40:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.001 15:40:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.539 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:29.539 15:40:58 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:29.539 15:40:58 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:29.539 15:40:58 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:29.539 15:40:58 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.539 15:40:58 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:30.913 00:04:30.913 real 0m4.047s 00:04:30.913 user 0m1.053s 00:04:30.913 sys 0m1.903s 00:04:30.913 15:41:00 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.913 15:41:00 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:30.913 ************************************ 00:04:30.913 END TEST allowed 00:04:30.913 ************************************ 00:04:30.913 15:41:00 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:30.913 00:04:30.913 real 0m11.145s 00:04:30.913 user 0m3.395s 00:04:30.913 sys 0m5.747s 00:04:30.913 15:41:00 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.913 15:41:00 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:30.913 ************************************ 00:04:30.913 END TEST acl 00:04:30.913 ************************************ 00:04:30.913 15:41:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:30.913 15:41:00 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:30.913 15:41:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.913 15:41:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.913 15:41:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.913 ************************************ 00:04:30.913 START TEST hugepages 00:04:30.913 ************************************ 00:04:30.913 15:41:00 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:30.913 * Looking for test storage... 00:04:30.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.913 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 39347900 kB' 'MemAvailable: 42922800 kB' 'Buffers: 2704 kB' 'Cached: 14533220 kB' 'SwapCached: 0 kB' 'Active: 11535920 kB' 'Inactive: 3526304 kB' 'Active(anon): 11094160 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529728 kB' 'Mapped: 211280 kB' 'Shmem: 10567860 kB' 'KReclaimable: 200916 kB' 'Slab: 571636 kB' 'SReclaimable: 200916 kB' 'SUnreclaim: 370720 kB' 'KernelStack: 12896 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562320 kB' 'Committed_AS: 12228756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.914 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.915 
15:41:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:30.915 15:41:00 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:30.915 15:41:00 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.915 15:41:00 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.915 15:41:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:30.915 ************************************ 00:04:30.915 START TEST default_setup 00:04:30.915 ************************************ 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.915 15:41:00 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.291 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.291 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:32.291 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.291 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.291 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.291 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.291 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.291 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:32.291 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:33.270 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.538 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41445536 kB' 'MemAvailable: 45020348 kB' 'Buffers: 2704 kB' 'Cached: 14533312 kB' 'SwapCached: 0 kB' 'Active: 11553668 kB' 'Inactive: 3526304 kB' 'Active(anon): 11111908 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547140 kB' 'Mapped: 210896 kB' 'Shmem: 10567952 kB' 'KReclaimable: 200740 kB' 'Slab: 571288 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370548 
kB' 'KernelStack: 12768 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.539 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41448588 kB' 'MemAvailable: 45023400 kB' 'Buffers: 2704 kB' 'Cached: 14533312 kB' 'SwapCached: 0 kB' 'Active: 11554240 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112480 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547792 kB' 'Mapped: 211128 kB' 'Shmem: 10567952 kB' 'KReclaimable: 200740 kB' 'Slab: 571336 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370596 kB' 'KernelStack: 12800 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.540 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.541 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
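The xtrace output above is setup/common.sh's get_meminfo helper scanning every /proc/meminfo key until it reaches the one it was asked for (first HugePages_Surp, then HugePages_Rsvd), which is why the log repeats the same `continue` once per non-matching field. Below is a minimal, self-contained reconstruction of that lookup inferred from the trace; the function name and structure are assumptions for illustration, not the verbatim SPDK helper.

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup seen in the trace above (assumed names).
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ mem_f mem line
    mem_f=/proc/meminfo
    # A per-node lookup reads the node-local meminfo instead, as the trace
    # later does for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node meminfo lines are prefixed with "Node <n> "; strip it, as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Key:   value kB" on ': ' into key / value / unit.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of 'continue' in the log
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Total     # -> 1024 on the test node in this run
get_meminfo_sketch HugePages_Surp 0    # per-node lookup, -> 0 for node 0
```

The echo/return pair at the end of each matching iteration is what shows up in the log as `-- # echo 0` followed by `-- # return 0`, feeding values such as surp=0 and resv=0 back into hugepages.sh.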
00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.542 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41448960 kB' 'MemAvailable: 45023772 kB' 'Buffers: 2704 kB' 'Cached: 14533332 kB' 'SwapCached: 0 kB' 'Active: 11553496 kB' 'Inactive: 3526304 kB' 'Active(anon): 11111736 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547000 kB' 'Mapped: 210888 kB' 'Shmem: 10567972 kB' 'KReclaimable: 200740 kB' 'Slab: 571408 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370668 kB' 'KernelStack: 12832 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 
15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.543 
15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.543 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.544 nr_hugepages=1024 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.544 resv_hugepages=0 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.544 surplus_hugepages=0 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.544 anon_hugepages=0 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.544 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41448960 kB' 'MemAvailable: 45023772 kB' 'Buffers: 2704 kB' 'Cached: 14533356 kB' 'SwapCached: 0 kB' 'Active: 11553500 kB' 'Inactive: 3526304 kB' 'Active(anon): 11111740 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546968 kB' 'Mapped: 210888 kB' 'Shmem: 10567996 kB' 'KReclaimable: 200740 kB' 'Slab: 571400 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370660 kB' 'KernelStack: 12816 kB' 'PageTables: 8368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.545 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18987804 kB' 'MemUsed: 13842080 kB' 'SwapCached: 0 kB' 'Active: 7350196 kB' 'Inactive: 3279424 kB' 'Active(anon): 7058620 kB' 'Inactive(anon): 0 kB' 'Active(file): 291576 kB' 'Inactive(file): 3279424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10340712 kB' 'Mapped: 165292 kB' 'AnonPages: 292000 kB' 'Shmem: 6769712 kB' 'KernelStack: 7960 kB' 'PageTables: 4908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96112 kB' 'Slab: 291652 kB' 'SReclaimable: 96112 kB' 'SUnreclaim: 195540 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.546 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:33.547 node0=1024 expecting 1024 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:33.547 00:04:33.547 real 0m2.573s 00:04:33.547 user 0m0.671s 00:04:33.547 sys 0m0.978s 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.547 15:41:03 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:33.547 ************************************ 00:04:33.547 END TEST default_setup 00:04:33.547 ************************************ 00:04:33.547 15:41:03 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:33.547 15:41:03 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:33.547 15:41:03 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.547 15:41:03 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.547 15:41:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.547 ************************************ 00:04:33.547 START TEST per_node_1G_alloc 00:04:33.547 ************************************ 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:33.547 15:41:03 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.547 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.548 15:41:03 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:34.926 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.926 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.926 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.926 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.926 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:34.926 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.926 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.926 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.926 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.926 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:34.926 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.926 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.926 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.926 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 
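Editor's note: the per_node_1G_alloc trace above computes a 512-page request for each of the two NUMA nodes (NRHUGE=512, HUGENODE=0,1) before handing off to scripts/setup.sh. Below is a minimal illustrative sketch of that per-node reservation using the kernel's standard sysfs interface; it is not the actual setup.sh implementation, and the NRHUGE/HUGENODE defaults are simply the values seen in this run.

#!/usr/bin/env bash
# Illustrative sketch only: reserve NRHUGE 2 MiB hugepages on every node listed
# in HUGENODE, mirroring the NRHUGE=512 HUGENODE=0,1 request traced above.
NRHUGE=${NRHUGE:-512}
HUGENODE=${HUGENODE:-0,1}

IFS=',' read -r -a nodes <<< "$HUGENODE"
for node in "${nodes[@]}"; do
    sysfs=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    echo "$NRHUGE" > "$sysfs"                      # requires root
    echo "node${node}: requested $NRHUGE, allocated $(cat "$sysfs")"
done

Since Hugepagesize is 2048 kB on this system (per the meminfo snapshots below), 512 pages per node corresponds to the 1 GiB-per-node total the test name refers to.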
00:04:34.926 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.926 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.926 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41442412 kB' 'MemAvailable: 45017224 kB' 'Buffers: 2704 kB' 'Cached: 14533432 kB' 'SwapCached: 0 kB' 'Active: 11553660 kB' 'Inactive: 3526304 kB' 'Active(anon): 11111900 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547040 kB' 'Mapped: 210912 kB' 'Shmem: 10568072 kB' 'KReclaimable: 200740 kB' 'Slab: 571464 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370724 kB' 'KernelStack: 12800 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197240 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.926 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.927 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # 
anon=0 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41442868 kB' 'MemAvailable: 45017680 kB' 'Buffers: 2704 kB' 'Cached: 14533440 kB' 'SwapCached: 0 kB' 'Active: 11553888 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112128 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547324 kB' 'Mapped: 210904 kB' 'Shmem: 10568080 kB' 'KReclaimable: 200740 kB' 'Slab: 571460 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370720 kB' 'KernelStack: 12864 kB' 'PageTables: 8440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 
15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.928 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.929 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.929 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41442904 kB' 'MemAvailable: 45017716 kB' 'Buffers: 2704 kB' 'Cached: 14533456 kB' 'SwapCached: 0 kB' 'Active: 11553788 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112028 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547184 kB' 'Mapped: 210904 kB' 'Shmem: 10568096 kB' 'KReclaimable: 200740 kB' 'Slab: 571496 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370756 kB' 'KernelStack: 12864 kB' 'PageTables: 8420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.930 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 
15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 
15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.931 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.932 
nr_hugepages=1024 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.932 resv_hugepages=0 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.932 surplus_hugepages=0 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.932 anon_hugepages=0 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41442904 kB' 'MemAvailable: 45017716 kB' 'Buffers: 2704 kB' 'Cached: 14533456 kB' 'SwapCached: 0 kB' 'Active: 11553932 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112172 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547372 kB' 'Mapped: 210904 kB' 'Shmem: 10568096 kB' 'KReclaimable: 200740 kB' 'Slab: 571496 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370756 kB' 'KernelStack: 12896 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.932 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.933 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20027352 kB' 'MemUsed: 12802532 kB' 'SwapCached: 0 kB' 'Active: 7350812 kB' 'Inactive: 3279424 kB' 'Active(anon): 7059236 kB' 'Inactive(anon): 0 kB' 'Active(file): 291576 kB' 'Inactive(file): 3279424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10340796 kB' 'Mapped: 165312 kB' 'AnonPages: 292644 kB' 'Shmem: 6769796 kB' 'KernelStack: 7912 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96112 kB' 'Slab: 291564 kB' 'SReclaimable: 96112 kB' 'SUnreclaim: 195452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.934 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.934 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 21416084 kB' 'MemUsed: 6295768 kB' 'SwapCached: 0 kB' 'Active: 4202636 kB' 'Inactive: 246880 kB' 'Active(anon): 4052452 kB' 'Inactive(anon): 0 kB' 'Active(file): 150184 kB' 'Inactive(file): 246880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4195412 kB' 'Mapped: 45596 kB' 'AnonPages: 254208 kB' 'Shmem: 3798348 kB' 'KernelStack: 4872 kB' 'PageTables: 3412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104628 kB' 'Slab: 279928 kB' 'SReclaimable: 104628 kB' 'SUnreclaim: 
175300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.935 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 
15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.936 node0=512 expecting 512 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.936 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:34.936 node1=512 expecting 512 00:04:34.937 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.937 00:04:34.937 real 0m1.446s 00:04:34.937 user 0m0.606s 00:04:34.937 sys 0m0.801s 00:04:34.937 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.937 15:41:04 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.937 ************************************ 00:04:34.937 END TEST per_node_1G_alloc 00:04:34.937 ************************************ 00:04:35.195 15:41:04 
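The per_node_1G_alloc run above ends with both NUMA nodes reporting 512 huge pages ("node0=512 expecting 512", "node1=512 expecting 512"). The per-node lookup that the trace keeps repeating amounts to reading one counter out of either /proc/meminfo or a node's own meminfo file; a minimal sketch of that lookup is below (the function name and the sed/awk pipeline are illustrative, not the actual setup/common.sh code):

# Illustrative sketch only; not the actual SPDK setup/common.sh helper.
# Print one meminfo counter, either system-wide or for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip it, then
    # print the value that follows the requested key.
    sed 's/^Node [0-9]* //' "$mem_f" | awk -v key="$get:" '$1 == key {print $2}'
}

get_meminfo_sketch HugePages_Free 0   # expected: 512 (matches "node0=512 expecting 512")
get_meminfo_sketch HugePages_Free 1   # expected: 512 (matches "node1=512 expecting 512")

The helper actually being traced builds the same result with mapfile/read and an extglob strip of the "Node <n> " prefix, but the effect is the same: return the requested counter for the requested node.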
setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:35.195 15:41:04 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:35.195 15:41:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.195 15:41:04 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.195 15:41:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.195 ************************************ 00:04:35.195 START TEST even_2G_alloc 00:04:35.195 ************************************ 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.195 15:41:04 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.582 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:36.582 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:36.582 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:36.582 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:36.582 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:36.582 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:36.582 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:36.582 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:36.582 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:36.582 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:36.582 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:36.582 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:36.582 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:36.582 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:36.582 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:36.582 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:36.582 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41460688 kB' 'MemAvailable: 45035500 kB' 'Buffers: 2704 kB' 
'Cached: 14533564 kB' 'SwapCached: 0 kB' 'Active: 11554040 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112280 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547284 kB' 'Mapped: 211016 kB' 'Shmem: 10568204 kB' 'KReclaimable: 200740 kB' 'Slab: 571368 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370628 kB' 'KernelStack: 12880 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 
15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.582 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.583 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41461124 kB' 'MemAvailable: 45035936 kB' 'Buffers: 2704 kB' 'Cached: 14533564 kB' 'SwapCached: 0 kB' 'Active: 11554476 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112716 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547720 kB' 'Mapped: 211016 kB' 'Shmem: 10568204 kB' 'KReclaimable: 200740 kB' 'Slab: 571368 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370628 kB' 'KernelStack: 12880 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197176 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 
15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.584 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.585 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.585 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41461884 kB' 'MemAvailable: 45036696 kB' 'Buffers: 2704 kB' 'Cached: 14533584 kB' 'SwapCached: 0 kB' 'Active: 11554076 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112316 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547248 kB' 'Mapped: 210920 kB' 'Shmem: 10568224 kB' 'KReclaimable: 200740 kB' 'Slab: 571324 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370584 kB' 'KernelStack: 12864 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 
15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.586 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:36.587 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:36.587 nr_hugepages=1024 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.588 resv_hugepages=0 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.588 surplus_hugepages=0 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:36.588 anon_hugepages=0 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41461884 kB' 'MemAvailable: 45036696 kB' 'Buffers: 2704 kB' 'Cached: 14533608 kB' 'SwapCached: 0 kB' 'Active: 11554008 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112248 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547212 kB' 'Mapped: 210920 kB' 'Shmem: 10568248 kB' 'KReclaimable: 200740 kB' 'Slab: 571324 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370584 kB' 'KernelStack: 12848 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12249968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197192 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.588 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.589 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20033712 kB' 'MemUsed: 12796172 kB' 'SwapCached: 0 kB' 'Active: 7350512 kB' 'Inactive: 3279424 kB' 'Active(anon): 7058936 kB' 'Inactive(anon): 0 kB' 
'Active(file): 291576 kB' 'Inactive(file): 3279424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10340848 kB' 'Mapped: 165324 kB' 'AnonPages: 292184 kB' 'Shmem: 6769848 kB' 'KernelStack: 7960 kB' 'PageTables: 4904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96112 kB' 'Slab: 291552 kB' 'SReclaimable: 96112 kB' 'SUnreclaim: 195440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.590 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
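The long run of '[[ <field> == HugePages_Surp ]] ... continue' entries around this point is setup/common.sh's get_meminfo helper walking a per-node meminfo file one field at a time until it hits the requested key, then echoing that key's value. A minimal sketch of that helper, reconstructed from the xtrace above (paths and variable names are taken from the trace; treat it as an illustration, not the exact upstream source):

shopt -s extglob                                   # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # When a node is given and a per-node view exists, read that instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")               # per-node lines start with "Node <n> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"     # e.g. "HugePages_Surp:    0" -> var/val
        [[ $var == "$get" ]] || continue           # the repeated comparisons seen in the trace
        echo "$val"
        return 0
    done
    return 1
}
# Usage matching the trace: get_meminfo HugePages_Surp 0   -> surplus hugepages on node 0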
00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 21428580 kB' 'MemUsed: 6283272 kB' 'SwapCached: 0 kB' 'Active: 4203552 kB' 'Inactive: 246880 kB' 'Active(anon): 4053368 kB' 
'Inactive(anon): 0 kB' 'Active(file): 150184 kB' 'Inactive(file): 246880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4195508 kB' 'Mapped: 45596 kB' 'AnonPages: 255024 kB' 'Shmem: 3798444 kB' 'KernelStack: 4888 kB' 'PageTables: 3452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104628 kB' 'Slab: 279772 kB' 'SReclaimable: 104628 kB' 'SUnreclaim: 175144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.591 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
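For reference, the arithmetic this even_2G_alloc pass is driving at can be read straight off the trace: 1024 hugepages were requested in total, the per-node surplus lookups (the scans above for node 0 and node 1) both come back as 0, and each of the two NUMA nodes is expected to hold exactly half of the pages. Roughly, in shell terms (all numbers lifted from the trace, not recomputed):

nr_hugepages=1024                          # total requested by the even_2G_alloc test
total=1024 surp=0 resv=0                   # HugePages_Total/Surp/Rsvd read back via get_meminfo
(( total == nr_hugepages + surp + resv ))  # the hugepages.sh@110 check seen earlier in the trace
node0=512 node1=512                        # per-node share on a two-node system
(( node0 + node1 == total ))               # 512 + 512 == 1024, hence the "node0=512 expecting 512"
                                           # and "node1=512 expecting 512" lines that follow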
00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:36.592 node0=512 expecting 512 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:36.592 node1=512 expecting 512 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:36.592 00:04:36.592 real 0m1.565s 00:04:36.592 user 0m0.649s 00:04:36.592 sys 0m0.879s 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.592 15:41:06 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.592 ************************************ 00:04:36.593 END TEST even_2G_alloc 00:04:36.593 ************************************ 00:04:36.593 15:41:06 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:36.593 15:41:06 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:36.593 15:41:06 setup.sh.hugepages 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.593 15:41:06 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.593 15:41:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.593 ************************************ 00:04:36.593 START TEST odd_alloc 00:04:36.593 ************************************ 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:36.593 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.852 15:41:06 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.798 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.798 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.798 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.798 0000:00:04.4 (8086 0e24): Already using 
the vfio-pci driver 00:04:37.798 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.798 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:37.798 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:37.798 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.798 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.798 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.798 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.798 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.798 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:37.798 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.798 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:37.798 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.061 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.061 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:38.061 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:38.061 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.061 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.061 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:38.061 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:38.061 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41475060 kB' 'MemAvailable: 45049872 kB' 'Buffers: 2704 kB' 'Cached: 14533696 kB' 'SwapCached: 0 kB' 'Active: 11551076 kB' 'Inactive: 3526304 kB' 'Active(anon): 11109316 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544224 kB' 'Mapped: 210152 kB' 'Shmem: 10568336 kB' 'KReclaimable: 
200740 kB' 'Slab: 571012 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370272 kB' 'KernelStack: 12816 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 12236364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.062 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.062 15:41:07 
[... 00:04:38.062-063 key-by-key scan of the /proc/meminfo snapshot continues (Inactive through HardwareCorrupted); none of these keys match AnonHugePages, so each iteration hits continue ...]
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
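At this point the harness has captured anon=0 and is about to call the same common.sh helper again, this time for HugePages_Surp. Pieced together from the xtrace records above, get_meminfo appears to behave roughly as sketched below; this is an inferred reconstruction, not the verbatim setup/common.sh source, and the argument handling and extglob setup are assumptions.

shopt -s extglob   # assumed: required for the +([0-9]) pattern used below

get_meminfo() {                 # usage: get_meminfo <Key> [numa-node]
    local get=$1
    local node=${2:-}
    local var val
    local mem_f mem

    mem_f=/proc/meminfo
    # Per-node counters live in sysfs; with no node argument the path becomes
    # /sys/devices/system/node/node/meminfo, the test fails, and the
    # system-wide /proc/meminfo is used -- exactly what the trace shows.
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the log
        echo "$val"                        # e.g. 0 for AnonHugePages above
        return 0
    done
    return 1
}

A call such as surp=$(get_meminfo HugePages_Surp) would then yield the 0 seen a few lines below.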
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.063 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41475804 kB' 'MemAvailable: 45050616 kB' 'Buffers: 2704 kB' 'Cached: 14533696 kB' 'SwapCached: 0 kB' 'Active: 11550960 kB' 'Inactive: 3526304 kB' 'Active(anon): 11109200 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544136 kB' 'Mapped: 210212 kB' 'Shmem: 10568336 kB' 'KReclaimable: 200740 kB' 'Slab: 570984 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370244 kB' 'KernelStack: 12832 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 12236380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB'
[... 00:04:38.063-065 key-by-key scan of the snapshot above; none of MemTotal through HugePages_Rsvd match HugePages_Surp, so each iteration hits continue ...]
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.065 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41475808 kB' 'MemAvailable: 45050620 kB' 'Buffers: 2704 kB' 'Cached: 14533720 kB' 'SwapCached: 0 kB' 'Active: 11550556 kB' 'Inactive: 3526304 kB' 'Active(anon): 11108796 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543676 kB' 'Mapped: 210128 kB' 'Shmem: 10568360 kB' 'KReclaimable: 200740 kB' 'Slab: 571028 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370288 kB' 'KernelStack: 12768 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 12236404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB'
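The snapshot just printed is the data the next scan walks through; the fields the odd_alloc check actually cares about are HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0 and HugePages_Surp: 0. Outside the harness the same counters can be eyeballed directly with standard tools (this one-liner is not part of the SPDK scripts):

awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo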
[... 00:04:38.065-067 key-by-key scan of the snapshot above; none of MemTotal through HugePages_Free match HugePages_Rsvd, so each iteration hits continue ...]
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:38.067 nr_hugepages=1025
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:38.067 resv_hugepages=0
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:38.067 surplus_hugepages=0
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:38.067 anon_hugepages=0
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
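Taken together, the three lookups and the two arithmetic checks traced above amount to the accounting sketched below: the helper values are captured, echoed for the log, and the requested odd page count (1025) must be fully explained by the pool, with no surplus or reserved pages skewing it. This is a hedged paraphrase of setup/hugepages.sh@97-110 inferred from the trace, not the script's literal source; it assumes the get_meminfo helper sketched earlier, and reading the literal 1025 on the left of both checks as the kernel-reported HugePages_Total is an interpretation (the two values coincide here).

nr_hugepages=1025                       # the odd count this test requested
anon=$(get_meminfo AnonHugePages)       # 0 in the snapshots above
surp=$(get_meminfo HugePages_Surp)      # 0
resv=$(get_meminfo HugePages_Rsvd)      # 0
total=$(get_meminfo HugePages_Total)    # 1025

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The pool must account exactly for the odd request: nothing hidden in
# surplus or reservations, and the kernel-visible total must match.
(( total == nr_hugepages + surp + resv )) || exit 1
(( total == nr_hugepages )) || exit 1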
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.067 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41475968 kB' 'MemAvailable: 45050780 kB' 'Buffers: 2704 kB' 'Cached: 14533724 kB' 'SwapCached: 0 kB' 'Active: 11550516 kB' 'Inactive: 3526304 kB' 'Active(anon): 11108756 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543672 kB' 'Mapped: 210128 kB' 'Shmem: 10568364 kB' 'KReclaimable: 200740 kB' 'Slab: 571028 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370288 kB' 'KernelStack: 12800 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 12236792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB'
[... 00:04:38.067-068 key-by-key scan of the snapshot above for HugePages_Total; the entries from MemTotal through SecPageTables do not match and each iteration hits continue ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.068 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.069 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.331 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20045184 kB' 'MemUsed: 12784700 kB' 'SwapCached: 0 kB' 'Active: 7348996 kB' 'Inactive: 3279424 kB' 'Active(anon): 7057420 kB' 'Inactive(anon): 0 kB' 'Active(file): 291576 kB' 'Inactive(file): 3279424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10340856 kB' 'Mapped: 164752 kB' 'AnonPages: 290812 kB' 'Shmem: 6769856 kB' 'KernelStack: 7992 kB' 'PageTables: 4980 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96112 kB' 'Slab: 291336 kB' 'SReclaimable: 96112 kB' 'SUnreclaim: 195224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 
15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
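For readers skimming this trace: the long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" above are setup/common.sh's get_meminfo helper scanning a node-local meminfo file one field at a time until it reaches the key it was asked for. Below is a condensed, standalone bash sketch of that technique, written for this log as an illustration only (it is not the verbatim SPDK helper, and the sample values in the comments are just examples taken from the node0 dump above).

#!/usr/bin/env bash
# Condensed sketch of the get_meminfo technique traced above: read either
# /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strip the
# "Node <N> " prefix carried by the per-node files, then scan field by field
# until the requested key is found. Illustrative only, not the SPDK helper.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node "Node N " prefix

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example calls matching what the odd_alloc test queries per node:
get_meminfo_sketch HugePages_Total 0   # e.g. 512 on this machine
get_meminfo_sketch HugePages_Surp 0    # e.g. 0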
00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.332 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 21430012 kB' 'MemUsed: 6281840 kB' 'SwapCached: 0 kB' 'Active: 4202160 kB' 'Inactive: 246880 kB' 'Active(anon): 4051976 kB' 'Inactive(anon): 0 kB' 'Active(file): 150184 kB' 'Inactive(file): 246880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4195632 kB' 'Mapped: 45376 kB' 'AnonPages: 253520 kB' 'Shmem: 3798568 kB' 'KernelStack: 4856 kB' 'PageTables: 3152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104628 kB' 'Slab: 279692 kB' 'SReclaimable: 104628 kB' 'SUnreclaim: 175064 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 
15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.333 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
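A note on what the odd_alloc assertions around this point are checking: 1025 hugepages cannot be split evenly across this box's two NUMA nodes, so one node ends up with 512 and the other with 513, and the test re-reads the per-node HugePages totals (plus any surplus) and compares the sum with the global count. The bash sketch below mirrors that bookkeeping in simplified form; the variable names follow the trace (nr_hugepages, nodes_test), but the implementation is condensed and ignores reserved/surplus pages, so it is not the exact hugepages.sh code.

# Simplified sketch of the odd_alloc bookkeeping (not the exact hugepages.sh
# logic; surplus/reserved pages are ignored for brevity).
nr_hugepages=1025
declare -a nodes_test=()

for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
    nodes_test[n]=$(awk -v n="$n" \
        '$1 == "Node" && $2 == n && $3 == "HugePages_Total:" { print $4 }' \
        "$node_dir/meminfo")
done

total=0
for n in "${!nodes_test[@]}"; do
    echo "node$n=${nodes_test[n]} hugepages"
    (( total += nodes_test[n] ))
done

(( total == nr_hugepages )) && echo "odd split accounted for: $total pages"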
00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:38.334 node0=512 expecting 513 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:38.334 node1=513 expecting 512 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:38.334 00:04:38.334 real 0m1.539s 00:04:38.334 user 0m0.608s 00:04:38.334 sys 0m0.896s 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.334 15:41:07 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:38.334 ************************************ 00:04:38.334 END TEST odd_alloc 00:04:38.334 ************************************ 00:04:38.334 15:41:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:38.334 15:41:07 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:38.334 15:41:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.334 15:41:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.334 15:41:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.334 ************************************ 00:04:38.334 START TEST custom_alloc 00:04:38.334 ************************************ 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # 
get_test_nr_hugepages_per_node 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.334 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.335 15:41:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.269 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.532 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.532 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.532 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.532 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.532 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.532 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.532 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.532 0000:80:04.7 
(8086 0e27): Already using the vfio-pci driver 00:04:39.532 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.532 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.532 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.532 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.532 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.532 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.532 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.532 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.532 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 40429036 kB' 'MemAvailable: 44003848 kB' 'Buffers: 2704 kB' 'Cached: 14533832 kB' 'SwapCached: 0 kB' 'Active: 11551304 kB' 'Inactive: 3526304 kB' 'Active(anon): 11109544 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544220 kB' 'Mapped: 210172 kB' 'Shmem: 10568472 kB' 'KReclaimable: 200740 kB' 'Slab: 570648 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 369908 kB' 'KernelStack: 12816 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 12236988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197096 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 
15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.533 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 
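(At this point verify_nr_hugepages is pulling individual fields out of /proc/meminfo through get_meminfo in setup/common.sh: the AnonHugePages lookup above returned 0, so anon=0, and the same walk now repeats for HugePages_Surp and HugePages_Rsvd. The long runs of "[[ ... ]] / continue" lines are just xtrace of that field-matching loop. A stripped-down approximation of the helper -- the real one differs in details, e.g. it uses mapfile and strips per-node "Node N" prefixes:

  get_meminfo() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # With a node argument and a node-local meminfo present, read that instead.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip fields until the requested one
          echo "$val"                        # value only, unit discarded
          return 0
      done < "$mem_f"
      return 1
  }
  # e.g. get_meminfo HugePages_Surp -> 0 on this box, hence surp=0 below)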
00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 40432000 kB' 'MemAvailable: 44006812 kB' 'Buffers: 2704 kB' 'Cached: 14533832 kB' 'SwapCached: 0 kB' 'Active: 11551320 kB' 'Inactive: 3526304 kB' 'Active(anon): 11109560 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544220 kB' 'Mapped: 210140 kB' 'Shmem: 10568472 kB' 'KReclaimable: 200740 kB' 'Slab: 570620 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 369880 kB' 'KernelStack: 12848 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 12237008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.534 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 
15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.535 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 40432208 kB' 'MemAvailable: 44007020 kB' 'Buffers: 2704 kB' 'Cached: 14533852 kB' 'SwapCached: 0 kB' 'Active: 11551144 kB' 'Inactive: 3526304 kB' 
'Active(anon): 11109384 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544108 kB' 'Mapped: 210140 kB' 'Shmem: 10568492 kB' 'KReclaimable: 200740 kB' 'Slab: 570696 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 369956 kB' 'KernelStack: 12848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 12237028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.536 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.799 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.799 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.799 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.799 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 
15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.800 15:41:09 setup.sh.hugepages.custom_alloc -- 
00:04:39.800 [setup/common.sh@31-@32: IFS=': ' read/skip loop continues over the remaining /proc/meminfo keys (SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free), none of which match HugePages_Rsvd]
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:39.801 nr_hugepages=1536
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:39.801 resv_hugepages=0
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.801 surplus_hugepages=0
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.801 anon_hugepages=0
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
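The lookups traced here all follow the same pattern: get_meminfo walks /proc/meminfo with IFS=': ' read -r var val _ and echoes the value of the single key it was asked for (HugePages_Rsvd above, HugePages_Total next). A minimal stand-alone sketch of that pattern follows; the helper name get_meminfo_value is illustrative only, not part of setup/common.sh.

  #!/usr/bin/env bash
  # Sketch only: print the value of one /proc/meminfo key, the way the traced loop does.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Skip every key that does not match the requested one,
          # mirroring the [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] checks in the trace.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value HugePages_Rsvd   # prints 0 on the system traced above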
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.801 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 40432924 kB' 'MemAvailable: 44007736 kB' 'Buffers: 2704 kB' 'Cached: 14533876 kB' 'SwapCached: 0 kB' 'Active: 11551172 kB' 'Inactive: 3526304 kB' 'Active(anon): 11109412 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544100 kB' 'Mapped: 210140 kB' 'Shmem: 10568516 kB' 'KReclaimable: 200740 kB' 'Slab: 570696 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 369956 kB' 'KernelStack: 12848 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 12237052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197080 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB'
00:04:39.801 [setup/common.sh@31-@32: IFS=': ' read/skip loop iterates over the /proc/meminfo keys listed above until it reaches HugePages_Total]
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
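When get_meminfo is given a node number, as in the per-node lookups that follow, the same walk is pointed at /sys/devices/system/node/node<N>/meminfo and the leading "Node <N> " prefix is stripped first, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion above does. A rough stand-alone sketch of that variant; the name get_node_meminfo_value is illustrative, not the script's own.

  #!/usr/bin/env bash
  shopt -s extglob   # required for the +([0-9]) pattern used below

  # Sketch only: print one key from /sys/devices/system/node/node<N>/meminfo.
  get_node_meminfo_value() {
      local node=$1 get=$2 line var val _
      local mem_f=/sys/devices/system/node/node${node}/meminfo
      [[ -e $mem_f ]] || return 1
      while IFS= read -r line; do
          # Per-node meminfo rows are prefixed with "Node <N> "; drop that first.
          line=${line#Node +([0-9]) }
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

  get_node_meminfo_value 0 HugePages_Surp   # prints 0 for node0 in the trace above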
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.803 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20039028 kB' 'MemUsed: 12790856 kB' 'SwapCached: 0 kB' 'Active: 7349468 kB' 'Inactive: 3279424 kB' 'Active(anon): 7057892 kB' 'Inactive(anon): 0 kB' 'Active(file): 291576 kB' 'Inactive(file): 3279424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10340860 kB' 'Mapped: 164764 kB' 'AnonPages: 291148 kB' 'Shmem: 6769860 kB' 'KernelStack: 7992 kB' 'PageTables: 4968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96112 kB' 'Slab: 291240 kB' 'SReclaimable: 96112 kB' 'SUnreclaim: 195128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.803 [setup/common.sh@31-@32: IFS=': ' read/skip loop iterates over the node0 meminfo keys listed above until it reaches HugePages_Surp]
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
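The hugepages.sh@115-@117 loop is building a per-node tally (512 pages expected on node0, 1024 on node1 in this run) that has to add up to nr_hugepages plus surplus and reserved pages. A hedged sketch of that accounting check, assuming 2048 kB pages (Hugepagesize: 2048 kB above) and the standard per-node sysfs counters rather than the script's own nodes_test variables:

  #!/usr/bin/env bash
  # Sketch only: confirm the per-node hugepage counts add up to the requested total.
  declare -A expected=( [0]=512 [1]=1024 )   # split used in this test run
  total=0
  for node in "${!expected[@]}"; do
      pages=$(cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node${node}: expected ${expected[$node]}, allocated ${pages}"
      total=$(( total + pages ))
  done
  # surplus=0 and reserved=0 in the traced run, so the sum should equal 1536
  (( total == 1536 )) && echo "per-node allocation adds up to nr_hugepages=1536"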
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.804 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 20394416 kB' 'MemUsed: 7317436 kB' 'SwapCached: 0 kB' 'Active: 4202568 kB' 'Inactive: 246880 kB' 'Active(anon): 4052384 kB' 'Inactive(anon): 0 kB' 'Active(file): 150184 kB' 'Inactive(file): 246880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4195756 kB' 'Mapped: 45376 kB' 'AnonPages: 254264 kB' 'Shmem: 3798692 kB' 'KernelStack: 4872 kB' 'PageTables: 3100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 104628 kB' 'Slab: 279456 kB' 'SReclaimable: 104628 kB' 'SUnreclaim: 174828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:39.805 [setup/common.sh@31-@32: IFS=': ' read/skip loop iterates over the node1 meminfo keys listed above until it reaches HugePages_Surp]
00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:39.806 node0=512 expecting 512 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:39.806 node1=1024 expecting 1024 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:39.806 00:04:39.806 real 0m1.465s 00:04:39.806 user 0m0.584s 00:04:39.806 sys 0m0.842s 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.806 15:41:09 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.806 ************************************ 00:04:39.806 END TEST custom_alloc 00:04:39.806 ************************************ 00:04:39.806 15:41:09 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:39.806 15:41:09 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:39.806 15:41:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.806 15:41:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.806 15:41:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.806 ************************************ 00:04:39.806 START TEST no_shrink_alloc 00:04:39.806 ************************************ 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.806 15:41:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.741 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:40.741 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:40.741 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:40.741 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:40.741 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:40.741 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:40.741 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.002 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.002 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:41.002 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.002 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:41.002 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:41.002 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:41.002 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:41.002 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:41.002 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:41.002 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 
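[editor's note] The hugepages.sh@49-@73 trace above shows how the no_shrink_alloc test derives its per-node page counts: nr_hugepages=1024, which appears to come from dividing the requested 2097152 kB by the 2048 kB default hugepage size, and because an explicit node list ('0') was passed, the full amount is pinned to node 0 rather than split across both NUMA nodes. A small illustrative sketch of that distribution logic (not the SPDK code itself; names and the two-node assumption are mine):

    # Sketch: distribute a requested hugepage count across NUMA nodes,
    # mirroring the get_test_nr_hugepages_per_node behaviour in the trace.
    distribute_hugepages_sketch() {
        local nr_hugepages=$1; shift
        local -a user_nodes=("$@")
        local -A nodes_test=()
        local no_nodes=2   # assumption: two NUMA nodes, as on this test host
        local node
        if (( ${#user_nodes[@]} > 0 )); then
            # Explicit node list: pin the full count to each requested node.
            for node in "${user_nodes[@]}"; do
                nodes_test[$node]=$nr_hugepages
            done
        else
            # No node list: split the count evenly across all nodes.
            for (( node = 0; node < no_nodes; node++ )); do
                nodes_test[$node]=$(( nr_hugepages / no_nodes ))
            done
        fi
        for node in "${!nodes_test[@]}"; do
            echo "node${node}=${nodes_test[$node]}"
        done
    }

Calling distribute_hugepages_sketch 1024 0 prints node0=1024, matching the nodes_test[_no_nodes]=1024 assignment logged above.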
00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41458072 kB' 'MemAvailable: 45032884 kB' 'Buffers: 2704 kB' 'Cached: 14533964 kB' 'SwapCached: 0 kB' 'Active: 11551584 kB' 'Inactive: 3526304 kB' 'Active(anon): 11109824 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544584 kB' 'Mapped: 210152 kB' 'Shmem: 10568604 kB' 'KReclaimable: 200740 kB' 'Slab: 570776 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370036 kB' 'KernelStack: 12768 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12237320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197064 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.002 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.003 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41458544 kB' 'MemAvailable: 45033356 kB' 'Buffers: 2704 kB' 'Cached: 14533964 kB' 'SwapCached: 0 kB' 'Active: 11552268 kB' 'Inactive: 3526304 kB' 'Active(anon): 11110508 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545272 kB' 'Mapped: 210152 kB' 
'Shmem: 10568604 kB' 'KReclaimable: 200740 kB' 'Slab: 570784 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370044 kB' 'KernelStack: 12880 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12237340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
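[editor's note] verify_nr_hugepages walks the same meminfo snapshot once per counter (anon=0 above, then HugePages_Surp, then HugePages_Rsvd) before comparing HugePages_Total against the expected 1024. When checking those counters interactively rather than inside the test harness, a single pass is simpler; the following one-off sketch is not part of the test suite:

    # Sketch: grab the global hugepage counters in one pass over /proc/meminfo.
    # For the snapshot above it prints: total=1024 free=1024 rsvd=0 surp=0
    awk '
        /^HugePages_Total:/ { total = $2 }
        /^HugePages_Free:/  { free  = $2 }
        /^HugePages_Rsvd:/  { rsvd  = $2 }
        /^HugePages_Surp:/  { surp  = $2 }
        END { printf "total=%d free=%d rsvd=%d surp=%d\n", total, free, rsvd, surp }
    ' /proc/meminfo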
00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.004 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41458980 kB' 'MemAvailable: 45033792 kB' 'Buffers: 2704 kB' 'Cached: 14533964 kB' 'SwapCached: 0 kB' 'Active: 11551796 kB' 'Inactive: 3526304 kB' 'Active(anon): 11110036 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544800 kB' 'Mapped: 210152 kB' 'Shmem: 10568604 kB' 'KReclaimable: 200740 kB' 'Slab: 570784 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370044 kB' 'KernelStack: 12880 kB' 'PageTables: 8116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 
'Committed_AS: 12237360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.005 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.006 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.007 15:41:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.268 nr_hugepages=1024 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.268 resv_hugepages=0 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.268 surplus_hugepages=0 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.268 anon_hugepages=0 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.268 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41459568 kB' 'MemAvailable: 45034380 kB' 'Buffers: 2704 kB' 'Cached: 14534008 kB' 'SwapCached: 0 kB' 'Active: 11551724 kB' 'Inactive: 3526304 kB' 'Active(anon): 11109964 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544760 kB' 'Mapped: 210156 kB' 'Shmem: 10568648 kB' 'KReclaimable: 200740 kB' 'Slab: 570800 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370060 kB' 'KernelStack: 12832 kB' 
'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12237384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.269 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
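The loop traced here is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time until it reaches HugePages_Total. Outside the test harness the same counters can be pulled with a one-line awk filter; the helper below is a minimal sketch for illustration only (get_meminfo_field is not part of SPDK), not the project's implementation.

#!/usr/bin/env bash
# Sketch only: read a single field from /proc/meminfo, mirroring what the
# traced get_meminfo loop computes for the system-wide hugepage counters.
get_meminfo_field() {
    local key=$1
    awk -v k="${key}:" '$1 == k { print $2 }' /proc/meminfo
}

get_meminfo_field HugePages_Total   # 1024 in this run, matching the log
get_meminfo_field HugePages_Rsvd    # 0
get_meminfo_field HugePages_Surp    # 0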
00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 
15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # return 0 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18982188 kB' 'MemUsed: 13847696 kB' 'SwapCached: 0 kB' 'Active: 7349928 kB' 'Inactive: 3279424 kB' 'Active(anon): 7058352 kB' 'Inactive(anon): 0 kB' 'Active(file): 291576 kB' 'Inactive(file): 3279424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10340908 kB' 'Mapped: 164780 kB' 'AnonPages: 291784 kB' 'Shmem: 6769908 kB' 'KernelStack: 7992 kB' 'PageTables: 4972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96112 kB' 'Slab: 291372 kB' 'SReclaimable: 96112 kB' 'SUnreclaim: 195260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.270 15:41:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.270 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.271 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue
[setup/common.sh@31-32, 00:04:41.271-00:04:41.272: get_meminfo reads and skips the remaining /proc/meminfo fields (Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free); none match HugePages_Surp]
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:41.272 15:41:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
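The get_meminfo trace above (common.sh@31-33) is a plain key lookup over /proc/meminfo: each line is split on ': ', skipped until the requested field matches, and the value is echoed; the 0 it returns here is what gets added by (( nodes_test[node] += 0 )). Before the setup.sh output that follows, a minimal sketch of that lookup pattern; it is illustrative only, not the SPDK common.sh helper, and the function name is made up for the example.

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern in the trace above (illustrative,
# not scripts/setup/common.sh). Prints the value of one /proc/meminfo field.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"   # e.g. HugePages_Surp -> 0 on this node
            return 0
        fi
    done < /proc/meminfo
    echo 0   # assumption: report 0 when the field is absent
}

get_meminfo_value HugePages_Surp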
00:04:42.656 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:42.656 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:42.656 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:42.656 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:42.656 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:42.656 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:42.656 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:42.656 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:42.656 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:42.656 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:42.656 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:42.656 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:42.656 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:42.656 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:42.656 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:42.656 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:42.656 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:42.656 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.656 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.657 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41434348 kB' 'MemAvailable: 45009160 kB' 'Buffers: 2704 kB' 'Cached: 14534076 kB' 'SwapCached: 0 kB' 'Active: 11554376 kB' 'Inactive: 3526304 kB' 'Active(anon): 11112616 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 547168 kB' 'Mapped: 210284 kB' 'Shmem: 10568716 kB' 'KReclaimable: 200740 kB' 'Slab: 571032 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370292 kB' 'KernelStack: 13216 kB' 'PageTables: 9568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12237828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197208 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB'
[setup/common.sh@31-32, 00:04:42.657-00:04:42.659: get_meminfo reads and skips every field from MemTotal through HardwareCorrupted; none match AnonHugePages]
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
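The INFO line above is the behavior this no_shrink_alloc test exercises: setup.sh ran with NRHUGE=512 and CLEAR_HUGE=no, found 1024 pages already allocated on node0, and kept them instead of shrinking the pool. Below is a rough sketch of that decision, assuming node0 and the standard 2048 kB sysfs path; it is not the actual scripts/setup.sh logic.

#!/usr/bin/env bash
# Sketch only: keep an existing hugepage allocation if it already covers
# the request, otherwise grow it. Assumes node0 and 2048 kB pages.
requested=${NRHUGE:-512}
nr_file=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
current=$(<"$nr_file")

if (( current >= requested )); then
    # Never shrink what is already allocated; just report it.
    echo "INFO: Requested $requested hugepages but $current already allocated on node0"
else
    echo "$requested" > "$nr_file"   # needs root; grows the static pool
fi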
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.659 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41436872 kB' 'MemAvailable: 45011684 kB' 'Buffers: 2704 kB' 'Cached: 14534076 kB' 'SwapCached: 0 kB' 'Active: 11552692 kB' 'Inactive: 3526304 kB' 'Active(anon): 11110932 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545432 kB' 'Mapped: 210240 kB' 'Shmem: 10568716 kB' 'KReclaimable: 200740 kB' 'Slab: 571032 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370292 kB' 'KernelStack: 12816 kB' 'PageTables: 8020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12237844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197128 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB'
[setup/common.sh@31-32, 00:04:42.659-00:04:42.661: get_meminfo reads and skips every field from MemTotal through HugePages_Rsvd; none match HugePages_Surp]
00:04:42.661 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.661 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.661 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.661 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
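verify_nr_hugepages has now gathered anon=0 and surp=0 and is about to read HugePages_Rsvd. One plausible way to fold these counters into the node0=1024 expecting 1024 style check is sketched below; the helper name and the exact arithmetic are assumptions, not the hugepages.sh implementation.

#!/usr/bin/env bash
# Illustrative verification pass over the hugepage counters used above.
meminfo() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

anon=$(meminfo AnonHugePages)    # transparent hugepages in use (kB)
surp=$(meminfo HugePages_Surp)   # surplus pages beyond the static pool
resv=$(meminfo HugePages_Rsvd)   # reserved but not yet faulted in
total=$(meminfo HugePages_Total)
free=$(meminfo HugePages_Free)

expected=1024   # the allocation this test configured, per the log
echo "node0=$((total - surp)) expecting $expected"
if (( total - surp != expected )); then
    echo "unexpected hugepage count (total=$total free=$free surp=$surp resv=$resv anon=$anon)" >&2
    exit 1
fi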
00:04:42.661 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.662 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41436720 kB' 'MemAvailable: 45011532 kB' 'Buffers: 2704 kB' 'Cached: 14534096 kB' 'SwapCached: 0 kB' 'Active: 11552244 kB' 'Inactive: 3526304 kB' 'Active(anon): 11110484 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 545060 kB' 'Mapped: 210224 kB' 'Shmem: 10568736 kB' 'KReclaimable: 200740 kB' 'Slab: 570944 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370204 kB' 'KernelStack: 12784 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12237868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB'
[setup/common.sh@31-32, 00:04:42.662-00:04:42.663: get_meminfo reads and skips the fields from MemTotal through PageTables; none match HugePages_Rsvd]
setup/common.sh@31 -- # read -r var val _ 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.663 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.664 nr_hugepages=1024 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.664 resv_hugepages=0 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.664 surplus_hugepages=0 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.664 anon_hugepages=0 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.664 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- 
# local mem_f mem 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 41437524 kB' 'MemAvailable: 45012336 kB' 'Buffers: 2704 kB' 'Cached: 14534140 kB' 'SwapCached: 0 kB' 'Active: 11551852 kB' 'Inactive: 3526304 kB' 'Active(anon): 11110092 kB' 'Inactive(anon): 0 kB' 'Active(file): 441760 kB' 'Inactive(file): 3526304 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544652 kB' 'Mapped: 210164 kB' 'Shmem: 10568780 kB' 'KReclaimable: 200740 kB' 'Slab: 571148 kB' 'SReclaimable: 200740 kB' 'SUnreclaim: 370408 kB' 'KernelStack: 12864 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 12237888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197144 kB' 'VmallocChunk: 0 kB' 'Percpu: 38400 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1810012 kB' 'DirectMap2M: 13838336 kB' 'DirectMap1G: 53477376 kB' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.665 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.666 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18961508 kB' 'MemUsed: 13868376 kB' 'SwapCached: 0 kB' 'Active: 7350532 kB' 'Inactive: 3279424 kB' 'Active(anon): 7058956 kB' 'Inactive(anon): 0 kB' 'Active(file): 291576 kB' 'Inactive(file): 3279424 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10341036 kB' 'Mapped: 164788 kB' 'AnonPages: 292128 kB' 'Shmem: 6770036 kB' 'KernelStack: 7976 kB' 'PageTables: 4924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96112 kB' 'Slab: 291524 kB' 'SReclaimable: 96112 kB' 'SUnreclaim: 195412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 
15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.667 15:41:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.667 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.668 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.669 node0=1024 expecting 1024 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.669 00:04:42.669 real 0m2.868s 00:04:42.669 user 0m1.181s 00:04:42.669 sys 0m1.615s 00:04:42.669 
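The trace above is the tail of the no_shrink_alloc check: the helper re-reads /proc/meminfo (and the per-node /sys/devices/system/node/node0/meminfo) once per lookup, scanning key by key until it reaches HugePages_Rsvd, HugePages_Total and HugePages_Surp, and finally reports "node0=1024 expecting 1024". A minimal standalone sketch of that lookup-and-check pattern follows; get_hp and the variable names are illustrative, not the SPDK setup/common.sh helpers themselves.

# Sketch only: mirrors what the traced get_meminfo calls appear to do.
get_hp() {                      # get_hp KEY [NODE] -> numeric value from meminfo
    local key=$1 node=${2-} file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node N "; drop it, then split on ": ".
    sed 's/^Node [0-9]* //' "$file" | awk -F': *' -v k="$key" '$1 == k {print $2 + 0}'
}

want=1024                                   # pages requested by the test
nr=$(get_hp HugePages_Total)                # 1024 in the run above
surp=$(get_hp HugePages_Surp)               # 0
resv=$(get_hp HugePages_Rsvd)               # 0
(( want == nr + surp + resv )) || echo "hugepage accounting mismatch" >&2
echo "node0=$(get_hp HugePages_Total 0) expecting $want"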
15:41:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.669 15:41:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:42.669 ************************************ 00:04:42.669 END TEST no_shrink_alloc 00:04:42.669 ************************************ 00:04:42.669 15:41:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:42.669 15:41:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:42.669 00:04:42.669 real 0m11.853s 00:04:42.669 user 0m4.470s 00:04:42.669 sys 0m6.258s 00:04:42.669 15:41:12 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.669 15:41:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:42.669 ************************************ 00:04:42.669 END TEST hugepages 00:04:42.669 ************************************ 00:04:42.669 15:41:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:42.669 15:41:12 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:42.669 15:41:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.669 15:41:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.669 15:41:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:42.669 ************************************ 00:04:42.669 START TEST driver 00:04:42.669 ************************************ 00:04:42.669 15:41:12 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:42.927 * Looking for test storage... 
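
The clear_hp pass at the end of the hugepages suite above zeroes every per-node hugepage pool through sysfs and exports CLEAR_HUGE=yes so later stages start from a clean allocation. Roughly, using the sysfs glob shown in the log (the target file name nr_hugepages is an assumption; writing it requires root):

    # A sketch: reset every per-node, per-size hugepage pool to 0.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # file name assumed, not shown in the trace
    done
    export CLEAR_HUGE=yes
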
00:04:42.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:42.927 15:41:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:42.927 15:41:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.927 15:41:12 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.460 15:41:15 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:45.460 15:41:15 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.460 15:41:15 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.460 15:41:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.460 ************************************ 00:04:45.460 START TEST guess_driver 00:04:45.460 ************************************ 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:45.460 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:45.461 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:45.461 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:45.461 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:45.461 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:45.461 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:45.461 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:45.461 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:45.461 15:41:15 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:45.461 Looking for driver=vfio-pci 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.461 15:41:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 
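
The guess_driver run above settles on vfio-pci because the host exposes populated IOMMU groups (141 of them) and modprobe can resolve the vfio_pci module chain; only if that check failed would the script fall back to a uio driver. A condensed sketch of the same decision (the fallback driver name and error handling below are assumptions, not shown in this excerpt):

    # Decide whether vfio-pci is usable: IOMMU groups must exist and the
    # vfio_pci module chain must resolve via modprobe.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        if [[ -e ${groups[0]} ]] && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo uio_pci_generic   # assumed fallback; not taken from this log
        fi
    }

    driver=$(pick_driver)
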
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:46.838 15:41:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.776 15:41:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:47.776 15:41:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:47.776 15:41:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.036 15:41:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:48.036 15:41:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:48.036 15:41:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.036 15:41:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.575 00:04:50.575 real 0m5.132s 00:04:50.575 user 0m1.103s 00:04:50.575 sys 0m2.004s 00:04:50.575 15:41:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.575 15:41:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.575 ************************************ 00:04:50.575 END TEST guess_driver 00:04:50.575 ************************************ 00:04:50.575 15:41:20 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:50.575 00:04:50.575 real 0m7.873s 00:04:50.575 user 0m1.720s 00:04:50.575 sys 0m3.085s 00:04:50.575 15:41:20 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.575 15:41:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:50.575 ************************************ 00:04:50.575 END TEST driver 00:04:50.575 ************************************ 00:04:50.575 15:41:20 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:50.575 15:41:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.575 15:41:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.575 15:41:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.575 15:41:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:50.575 ************************************ 00:04:50.575 START TEST devices 00:04:50.575 ************************************ 00:04:50.575 15:41:20 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:50.834 * Looking for test storage... 00:04:50.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.834 15:41:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:50.834 15:41:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:50.834 15:41:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:50.834 15:41:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:52.211 15:41:21 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:52.211 15:41:21 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:52.211 
15:41:21 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:52.211 No valid GPT data, bailing 00:04:52.211 15:41:21 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.211 15:41:21 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:52.211 15:41:21 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:52.211 15:41:21 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:52.211 15:41:21 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:52.212 15:41:21 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:52.212 15:41:21 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:52.212 15:41:21 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:52.212 15:41:21 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.212 15:41:21 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:04:52.212 15:41:21 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:52.212 15:41:21 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:52.212 15:41:21 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.212 15:41:21 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.212 15:41:21 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.212 15:41:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.472 ************************************ 00:04:52.472 START TEST nvme_mount 00:04:52.472 ************************************ 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
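
Device selection above keeps nvme0n1 because blkid reports an empty PTTYPE (so block_in_use returns 1, i.e. the disk carries no partition table) and its 1000204886016-byte capacity clears the 3 GiB minimum. The same filter, reduced to a sketch (the capacity lookup via /sys/block sector counts is an assumption about how sec_size_to_bytes works; only the final byte value appears in the trace):

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, as in devices.sh

    eligible() {
        local dev=$1
        # A device counts as free when blkid finds no partition table on it.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || return 1
        # Capacity in bytes from 512-byte sectors reported by sysfs (assumed).
        local bytes=$(( $(cat "/sys/block/$dev/size") * 512 ))
        (( bytes >= min_disk_size ))
    }

    eligible nvme0n1 && echo "nvme0n1 looks usable for the mount tests"
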
# (( part <= part_no )) 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.472 15:41:21 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.410 Creating new GPT entries in memory. 00:04:53.410 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.410 other utilities. 00:04:53.410 15:41:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.410 15:41:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.410 15:41:22 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.410 15:41:22 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.410 15:41:22 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:54.347 Creating new GPT entries in memory. 00:04:54.347 The operation has completed successfully. 00:04:54.347 15:41:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.347 15:41:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.347 15:41:23 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4088932 00:04:54.347 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.347 15:41:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:54.347 15:41:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.347 15:41:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:54.347 15:41:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:54.347 15:41:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:54.606 15:41:24 
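
The nvme_mount preparation above boils down to wiping the GPT, carving a single ~1 GiB partition, formatting it ext4 and mounting it under the test directory. Condensed from the commands visible in the trace (the mount point below is a short stand-in for the spdk/test/setup/nvme_mount path; requires root):

    disk=/dev/nvme0n1
    mnt=./nvme_mount          # stand-in for spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                            # destroy any existing GPT
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # one ~1 GiB partition
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"
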
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.606 15:41:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:55.541 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:55.802 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.802 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.063 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:56.063 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:56.063 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:56.063 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- 
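
The cleanup_nvme pass above unmounts the test directory if it is still mounted, then wipes both the partition and the whole disk so the next sub-test starts from a blank device. A sketch of the same teardown (path shortened as before):

    mnt=./nvme_mount          # stand-in for the real nvme_mount test path
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]]   && wipefs --all /dev/nvme0n1
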
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.063 15:41:25 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.440 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:57.441 15:41:26 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.441 15:41:27 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.441 15:41:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.377 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.377 15:41:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:58.635 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.894 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.894 00:04:58.894 real 0m6.443s 00:04:58.894 user 0m1.530s 00:04:58.894 sys 0m2.499s 00:04:58.894 15:41:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.894 15:41:28 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:58.894 ************************************ 00:04:58.894 END TEST nvme_mount 00:04:58.894 ************************************ 00:04:58.894 15:41:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:58.894 15:41:28 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:58.894 15:41:28 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.894 15:41:28 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.894 15:41:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.894 ************************************ 00:04:58.894 START TEST dm_mount 00:04:58.894 ************************************ 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.894 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.895 15:41:28 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:59.830 Creating new GPT entries in memory. 00:04:59.830 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:59.830 other utilities. 00:04:59.830 15:41:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:59.830 15:41:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:59.830 15:41:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:59.830 15:41:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:59.830 15:41:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:00.764 Creating new GPT entries in memory. 00:05:00.764 The operation has completed successfully. 00:05:00.764 15:41:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:00.764 15:41:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.764 15:41:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.764 15:41:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.764 15:41:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:02.140 The operation has completed successfully. 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4091269 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- 
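
For the dm_mount case above, two ~1 GiB partitions are created and a device-mapper target named nvme_dm_test is built on top of them; readlink on /dev/mapper/nvme_dm_test then resolves to /dev/dm-0. The table the script feeds to dmsetup is not visible in this excerpt; the sketch below assumes a plain linear concatenation of the two partitions, with sector counts taken from the sgdisk ranges in the trace:

    # Sizes in 512-byte sectors: 2099199-2048+1 and 4196351-2099200+1.
    p1_sectors=2097152
    p2_sectors=2097152

    dmsetup create nvme_dm_test <<EOF
    0 $p1_sectors linear /dev/nvme0n1p1 0
    $p1_sectors $p2_sectors linear /dev/nvme0n1p2 0
    EOF

    readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-0 in the log above
    mkdir -p ./dm_mount                    # stand-in for the dm_mount test path
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mount /dev/mapper/nvme_dm_test ./dm_mount
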
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.140 15:41:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.118 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:03.119 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.376 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.376 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:03.377 15:41:32 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.377 15:41:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.312 15:41:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.570 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.570 00:05:04.570 real 0m5.841s 00:05:04.570 user 0m0.997s 00:05:04.570 sys 0m1.700s 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.570 15:41:34 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:04.570 ************************************ 00:05:04.570 END TEST dm_mount 00:05:04.570 ************************************ 00:05:04.828 15:41:34 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:04.828 15:41:34 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:04.828 15:41:34 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.828 15:41:34 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.828 15:41:34 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.828 15:41:34 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.828 15:41:34 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.828 15:41:34 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.088 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:05.088 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:05.088 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:05.088 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:05.088 15:41:34 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:05.088 15:41:34 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:05.088 15:41:34 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:05.088 15:41:34 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.088 15:41:34 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:05.088 15:41:34 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.088 15:41:34 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:05.088 00:05:05.088 real 0m14.322s 00:05:05.088 user 0m3.228s 00:05:05.088 sys 0m5.300s 00:05:05.088 15:41:34 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.088 15:41:34 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:05.088 ************************************ 00:05:05.088 END TEST devices 00:05:05.088 ************************************ 00:05:05.088 15:41:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:05.088 00:05:05.088 real 0m45.436s 00:05:05.088 user 0m12.907s 00:05:05.088 sys 0m20.558s 00:05:05.088 15:41:34 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.088 15:41:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.088 ************************************ 00:05:05.088 END TEST setup.sh 00:05:05.088 ************************************ 00:05:05.088 15:41:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:05.088 15:41:34 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:06.464 Hugepages 00:05:06.464 node hugesize free / total 00:05:06.464 node0 1048576kB 0 / 0 00:05:06.464 node0 2048kB 2048 / 2048 00:05:06.464 node1 1048576kB 0 / 0 00:05:06.464 node1 2048kB 0 / 0 00:05:06.464 00:05:06.464 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:06.464 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:06.464 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:06.464 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:06.464 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:06.464 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:06.464 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:06.464 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:06.464 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:06.464 NVMe 0000:0b:00.0 
8086 0a54 0 nvme nvme0 nvme0n1 00:05:06.464 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:06.464 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:06.464 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:06.464 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:06.464 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:06.464 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:06.464 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:06.464 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:06.464 15:41:35 -- spdk/autotest.sh@130 -- # uname -s 00:05:06.464 15:41:35 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:06.464 15:41:35 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:06.464 15:41:35 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:07.840 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:07.840 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:07.840 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:07.840 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:07.840 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:07.840 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:07.840 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:07.840 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:07.840 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:08.777 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.777 15:41:38 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:09.715 15:41:39 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:09.715 15:41:39 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:09.715 15:41:39 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.715 15:41:39 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:09.715 15:41:39 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:09.715 15:41:39 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:09.715 15:41:39 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.715 15:41:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.715 15:41:39 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:09.974 15:41:39 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:09.974 15:41:39 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:05:09.974 15:41:39 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:10.911 Waiting for block devices as requested 00:05:11.172 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:11.172 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:11.172 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:11.433 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:11.433 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:11.433 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:11.433 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:11.692 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:11.692 0000:0b:00.0 (8086 0a54): vfio-pci -> 
nvme 00:05:11.950 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:11.950 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:11.950 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:11.950 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:12.210 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:12.210 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:12.210 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:12.210 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:12.470 15:41:42 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:12.470 15:41:42 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:05:12.470 15:41:42 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:12.470 15:41:42 -- common/autotest_common.sh@1502 -- # grep 0000:0b:00.0/nvme/nvme 00:05:12.470 15:41:42 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:12.470 15:41:42 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:05:12.470 15:41:42 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:05:12.470 15:41:42 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:12.470 15:41:42 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:12.471 15:41:42 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:12.471 15:41:42 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:12.471 15:41:42 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:12.471 15:41:42 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:12.471 15:41:42 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:12.471 15:41:42 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:12.471 15:41:42 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:12.471 15:41:42 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:12.471 15:41:42 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:12.471 15:41:42 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:12.471 15:41:42 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:12.471 15:41:42 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:12.471 15:41:42 -- common/autotest_common.sh@1557 -- # continue 00:05:12.471 15:41:42 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:12.471 15:41:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.471 15:41:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.471 15:41:42 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:12.471 15:41:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:12.471 15:41:42 -- common/autotest_common.sh@10 -- # set +x 00:05:12.471 15:41:42 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:13.848 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:13.848 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:13.848 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:13.848 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:13.848 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:13.848 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:13.848 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:13.848 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:13.848 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:13.848 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:13.848 
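For reference, the OACS/unvmcap probe that autotest_common.sh runs against /dev/nvme0 in the trace above can be reproduced by hand. A minimal sketch, assuming nvme-cli is installed, root access, and that the 8086:0a54 controller on this node still enumerates as /dev/nvme0; the variable names are illustrative only, not part of the SPDK scripts:

  ctrlr=/dev/nvme0                                  # controller name taken from the log
  # Read OACS and mask bit 3 (Namespace Management), as the trace does.
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
  oacs_ns_manage=$(( oacs & 0x8 ))
  echo "oacs=$oacs ns_manage=$oacs_ns_manage"
  if (( oacs_ns_manage != 0 )); then
      # Unallocated NVM capacity reported by the controller (0 in the trace above).
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      echo "unallocated NVM capacity:$unvmcap"
  fi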
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:13.848 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:13.848 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:13.848 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:13.848 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:13.848 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:14.786 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:05:15.044 15:41:44 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:15.045 15:41:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:15.045 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.045 15:41:44 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:15.045 15:41:44 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:15.045 15:41:44 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:15.045 15:41:44 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:15.045 15:41:44 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:15.045 15:41:44 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:15.045 15:41:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:15.045 15:41:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:15.045 15:41:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:15.045 15:41:44 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:15.045 15:41:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:15.045 15:41:44 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:15.045 15:41:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:05:15.045 15:41:44 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:15.045 15:41:44 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:05:15.045 15:41:44 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:15.045 15:41:44 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:15.045 15:41:44 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:15.045 15:41:44 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:0b:00.0 00:05:15.045 15:41:44 -- common/autotest_common.sh@1592 -- # [[ -z 0000:0b:00.0 ]] 00:05:15.045 15:41:44 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4096583 00:05:15.045 15:41:44 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.045 15:41:44 -- common/autotest_common.sh@1598 -- # waitforlisten 4096583 00:05:15.045 15:41:44 -- common/autotest_common.sh@829 -- # '[' -z 4096583 ']' 00:05:15.045 15:41:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.045 15:41:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.045 15:41:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.045 15:41:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.045 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:05:15.045 [2024-07-12 15:41:44.678010] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
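The opal_revert_cleanup stage above builds its BDF list from gen_nvme.sh and keeps only controllers whose PCI device ID is 0x0a54 before launching spdk_tgt. A rough standalone equivalent of that filtering, assuming the workspace path shown in the log (adjust rootdir for other checkouts):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # List the NVMe BDFs SPDK would use, exactly as get_nvme_bdfs does above.
  mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  opal_bdfs=()
  for bdf in "${bdfs[@]}"; do
      # PCI device ID straight from sysfs, e.g. 0x0a54 for 0000:0b:00.0 above.
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == 0x0a54 ]] && opal_bdfs+=("$bdf")
  done
  printf '%s\n' "${opal_bdfs[@]}"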
00:05:15.045 [2024-07-12 15:41:44.678085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4096583 ] 00:05:15.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.045 [2024-07-12 15:41:44.736967] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.307 [2024-07-12 15:41:44.845405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.567 15:41:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.567 15:41:45 -- common/autotest_common.sh@862 -- # return 0 00:05:15.567 15:41:45 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:15.567 15:41:45 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:15.567 15:41:45 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:05:18.894 nvme0n1 00:05:18.894 15:41:48 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:18.894 [2024-07-12 15:41:48.388151] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:18.894 [2024-07-12 15:41:48.388192] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:18.894 request: 00:05:18.894 { 00:05:18.894 "nvme_ctrlr_name": "nvme0", 00:05:18.894 "password": "test", 00:05:18.894 "method": "bdev_nvme_opal_revert", 00:05:18.894 "req_id": 1 00:05:18.894 } 00:05:18.894 Got JSON-RPC error response 00:05:18.894 response: 00:05:18.894 { 00:05:18.894 "code": -32603, 00:05:18.894 "message": "Internal error" 00:05:18.894 } 00:05:18.894 15:41:48 -- common/autotest_common.sh@1604 -- # true 00:05:18.894 15:41:48 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:18.894 15:41:48 -- common/autotest_common.sh@1608 -- # killprocess 4096583 00:05:18.894 15:41:48 -- common/autotest_common.sh@948 -- # '[' -z 4096583 ']' 00:05:18.894 15:41:48 -- common/autotest_common.sh@952 -- # kill -0 4096583 00:05:18.894 15:41:48 -- common/autotest_common.sh@953 -- # uname 00:05:18.894 15:41:48 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:18.894 15:41:48 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096583 00:05:18.894 15:41:48 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:18.894 15:41:48 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:18.894 15:41:48 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096583' 00:05:18.894 killing process with pid 4096583 00:05:18.894 15:41:48 -- common/autotest_common.sh@967 -- # kill 4096583 00:05:18.894 15:41:48 -- common/autotest_common.sh@972 -- # wait 4096583 00:05:20.791 15:41:50 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:20.791 15:41:50 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:20.791 15:41:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:20.791 15:41:50 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:20.791 15:41:50 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:20.791 15:41:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.791 15:41:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.791 15:41:50 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:20.791 15:41:50 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:20.791 15:41:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.791 15:41:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.791 15:41:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.791 ************************************ 00:05:20.791 START TEST env 00:05:20.791 ************************************ 00:05:20.791 15:41:50 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:20.791 * Looking for test storage... 00:05:20.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:20.791 15:41:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.791 15:41:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.791 15:41:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.791 15:41:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.791 ************************************ 00:05:20.791 START TEST env_memory 00:05:20.791 ************************************ 00:05:20.791 15:41:50 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:20.791 00:05:20.791 00:05:20.791 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.791 http://cunit.sourceforge.net/ 00:05:20.791 00:05:20.791 00:05:20.791 Suite: memory 00:05:20.791 Test: alloc and free memory map ...[2024-07-12 15:41:50.317957] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:20.791 passed 00:05:20.791 Test: mem map translation ...[2024-07-12 15:41:50.338059] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:20.791 [2024-07-12 15:41:50.338082] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:20.791 [2024-07-12 15:41:50.338133] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:20.791 [2024-07-12 15:41:50.338145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:20.791 passed 00:05:20.791 Test: mem map registration ...[2024-07-12 15:41:50.379219] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:20.791 [2024-07-12 15:41:50.379242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:20.791 passed 00:05:20.792 Test: mem map adjacent registrations ...passed 00:05:20.792 00:05:20.792 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.792 suites 1 1 n/a 0 0 00:05:20.792 tests 4 4 4 0 0 00:05:20.792 asserts 152 152 152 0 n/a 00:05:20.792 00:05:20.792 Elapsed time = 0.141 seconds 00:05:20.792 00:05:20.792 real 0m0.148s 00:05:20.792 user 0m0.140s 00:05:20.792 sys 0m0.007s 00:05:20.792 15:41:50 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.792 15:41:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:20.792 ************************************ 00:05:20.792 END TEST env_memory 00:05:20.792 ************************************ 00:05:20.792 15:41:50 env -- common/autotest_common.sh@1142 -- # return 0 00:05:20.792 15:41:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.792 15:41:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.792 15:41:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.792 15:41:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.792 ************************************ 00:05:20.792 START TEST env_vtophys 00:05:20.792 ************************************ 00:05:20.792 15:41:50 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:20.792 EAL: lib.eal log level changed from notice to debug 00:05:20.792 EAL: Detected lcore 0 as core 0 on socket 0 00:05:20.792 EAL: Detected lcore 1 as core 1 on socket 0 00:05:20.792 EAL: Detected lcore 2 as core 2 on socket 0 00:05:20.792 EAL: Detected lcore 3 as core 3 on socket 0 00:05:20.792 EAL: Detected lcore 4 as core 4 on socket 0 00:05:20.792 EAL: Detected lcore 5 as core 5 on socket 0 00:05:20.792 EAL: Detected lcore 6 as core 8 on socket 0 00:05:20.792 EAL: Detected lcore 7 as core 9 on socket 0 00:05:20.792 EAL: Detected lcore 8 as core 10 on socket 0 00:05:20.792 EAL: Detected lcore 9 as core 11 on socket 0 00:05:20.792 EAL: Detected lcore 10 as core 12 on socket 0 00:05:20.792 EAL: Detected lcore 11 as core 13 on socket 0 00:05:20.792 EAL: Detected lcore 12 as core 0 on socket 1 00:05:20.792 EAL: Detected lcore 13 as core 1 on socket 1 00:05:20.792 EAL: Detected lcore 14 as core 2 on socket 1 00:05:20.792 EAL: Detected lcore 15 as core 3 on socket 1 00:05:20.792 EAL: Detected lcore 16 as core 4 on socket 1 00:05:20.792 EAL: Detected lcore 17 as core 5 on socket 1 00:05:20.792 EAL: Detected lcore 18 as core 8 on socket 1 00:05:20.792 EAL: Detected lcore 19 as core 9 on socket 1 00:05:20.792 EAL: Detected lcore 20 as core 10 on socket 1 00:05:20.792 EAL: Detected lcore 21 as core 11 on socket 1 00:05:20.792 EAL: Detected lcore 22 as core 12 on socket 1 00:05:20.792 EAL: Detected lcore 23 as core 13 on socket 1 00:05:20.792 EAL: Detected lcore 24 as core 0 on socket 0 00:05:20.792 EAL: Detected lcore 25 as core 1 on socket 0 00:05:20.792 EAL: Detected lcore 26 as core 2 on socket 0 00:05:20.792 EAL: Detected lcore 27 as core 3 on socket 0 00:05:20.792 EAL: Detected lcore 28 as core 4 on socket 0 00:05:20.792 EAL: Detected lcore 29 as core 5 on socket 0 00:05:20.792 EAL: Detected lcore 30 as core 8 on socket 0 00:05:20.792 EAL: Detected lcore 31 as core 9 on socket 0 00:05:20.792 EAL: Detected lcore 32 as core 10 on socket 0 00:05:20.792 EAL: Detected lcore 33 as core 11 on socket 0 00:05:20.792 EAL: Detected lcore 34 as core 12 on socket 0 00:05:20.792 EAL: Detected lcore 35 as core 13 on socket 0 00:05:20.792 EAL: Detected lcore 36 as core 0 on socket 1 00:05:20.792 EAL: Detected lcore 37 as core 1 on socket 1 00:05:20.792 EAL: Detected lcore 38 as core 2 on socket 1 00:05:20.792 EAL: Detected lcore 39 as core 3 on socket 1 00:05:20.792 EAL: Detected lcore 40 as core 4 on socket 1 00:05:20.792 EAL: Detected lcore 41 as core 5 on socket 1 00:05:20.792 EAL: Detected 
lcore 42 as core 8 on socket 1 00:05:20.792 EAL: Detected lcore 43 as core 9 on socket 1 00:05:20.792 EAL: Detected lcore 44 as core 10 on socket 1 00:05:20.792 EAL: Detected lcore 45 as core 11 on socket 1 00:05:20.792 EAL: Detected lcore 46 as core 12 on socket 1 00:05:20.792 EAL: Detected lcore 47 as core 13 on socket 1 00:05:20.792 EAL: Maximum logical cores by configuration: 128 00:05:20.792 EAL: Detected CPU lcores: 48 00:05:20.792 EAL: Detected NUMA nodes: 2 00:05:20.792 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:20.792 EAL: Detected shared linkage of DPDK 00:05:20.792 EAL: No shared files mode enabled, IPC will be disabled 00:05:21.050 EAL: Bus pci wants IOVA as 'DC' 00:05:21.050 EAL: Buses did not request a specific IOVA mode. 00:05:21.050 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:21.050 EAL: Selected IOVA mode 'VA' 00:05:21.050 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.050 EAL: Probing VFIO support... 00:05:21.050 EAL: IOMMU type 1 (Type 1) is supported 00:05:21.050 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:21.050 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:21.050 EAL: VFIO support initialized 00:05:21.050 EAL: Ask a virtual area of 0x2e000 bytes 00:05:21.050 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:21.050 EAL: Setting up physically contiguous memory... 00:05:21.050 EAL: Setting maximum number of open files to 524288 00:05:21.050 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:21.050 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:21.050 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:21.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.050 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:21.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.050 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:21.050 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:21.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.050 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:21.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.050 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:21.050 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:21.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.050 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:21.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.050 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:21.050 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:21.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.050 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:21.050 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.050 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:21.050 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:21.050 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:21.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.050 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:21.050 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:21.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.050 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:21.050 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:21.050 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.050 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:21.050 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:21.050 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.050 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:21.051 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:21.051 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.051 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:21.051 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:21.051 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.051 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:21.051 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:21.051 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.051 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:21.051 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:21.051 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.051 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:21.051 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:21.051 EAL: Hugepages will be freed exactly as allocated. 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: TSC frequency is ~2700000 KHz 00:05:21.051 EAL: Main lcore 0 is ready (tid=7fb7d26cfa00;cpuset=[0]) 00:05:21.051 EAL: Trying to obtain current memory policy. 00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 0 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 2MB 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:21.051 EAL: Mem event callback 'spdk:(nil)' registered 00:05:21.051 00:05:21.051 00:05:21.051 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.051 http://cunit.sourceforge.net/ 00:05:21.051 00:05:21.051 00:05:21.051 Suite: components_suite 00:05:21.051 Test: vtophys_malloc_test ...passed 00:05:21.051 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 4MB 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was shrunk by 4MB 00:05:21.051 EAL: Trying to obtain current memory policy. 
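The "No free 2048 kB hugepages reported on node 1" notices in this run line up with the earlier setup.sh status output, where only node0 has 2 MB hugepages reserved. A quick way to inspect the per-node reservation outside the test run (plain sysfs/procfs reads, not SPDK scripts):

  # 2 MB hugepage count per NUMA node; node1 showing 0 explains the EAL notice.
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep -i huge /proc/meminfo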
00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 6MB 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was shrunk by 6MB 00:05:21.051 EAL: Trying to obtain current memory policy. 00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 10MB 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was shrunk by 10MB 00:05:21.051 EAL: Trying to obtain current memory policy. 00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 18MB 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was shrunk by 18MB 00:05:21.051 EAL: Trying to obtain current memory policy. 00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 34MB 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was shrunk by 34MB 00:05:21.051 EAL: Trying to obtain current memory policy. 00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 66MB 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was shrunk by 66MB 00:05:21.051 EAL: Trying to obtain current memory policy. 
00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 130MB 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was shrunk by 130MB 00:05:21.051 EAL: Trying to obtain current memory policy. 00:05:21.051 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.051 EAL: Restoring previous memory policy: 4 00:05:21.051 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.051 EAL: request: mp_malloc_sync 00:05:21.051 EAL: No shared files mode enabled, IPC is disabled 00:05:21.051 EAL: Heap on socket 0 was expanded by 258MB 00:05:21.310 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.310 EAL: request: mp_malloc_sync 00:05:21.310 EAL: No shared files mode enabled, IPC is disabled 00:05:21.310 EAL: Heap on socket 0 was shrunk by 258MB 00:05:21.310 EAL: Trying to obtain current memory policy. 00:05:21.310 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.310 EAL: Restoring previous memory policy: 4 00:05:21.310 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.310 EAL: request: mp_malloc_sync 00:05:21.310 EAL: No shared files mode enabled, IPC is disabled 00:05:21.310 EAL: Heap on socket 0 was expanded by 514MB 00:05:21.568 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.568 EAL: request: mp_malloc_sync 00:05:21.568 EAL: No shared files mode enabled, IPC is disabled 00:05:21.568 EAL: Heap on socket 0 was shrunk by 514MB 00:05:21.568 EAL: Trying to obtain current memory policy. 
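Each allocation in vtophys_spdk_malloc_test produces a matching pair of "Heap on socket 0 was expanded by N" and "was shrunk by N" notices through the registered 'spdk:' mem event callback. A small, hypothetical check of that pairing when re-running the binary and capturing its output (same workspace path as the log; root access and hugepages assumed):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo "$rootdir/test/env/vtophys/vtophys" 2>&1 | tee /tmp/vtophys.log
  # The two counts should match once every allocation has been freed.
  grep -c 'was expanded by' /tmp/vtophys.log
  grep -c 'was shrunk by' /tmp/vtophys.log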
00:05:21.568 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.825 EAL: Restoring previous memory policy: 4 00:05:21.825 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.825 EAL: request: mp_malloc_sync 00:05:21.825 EAL: No shared files mode enabled, IPC is disabled 00:05:21.825 EAL: Heap on socket 0 was expanded by 1026MB 00:05:22.083 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.341 EAL: request: mp_malloc_sync 00:05:22.341 EAL: No shared files mode enabled, IPC is disabled 00:05:22.342 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.342 passed 00:05:22.342 00:05:22.342 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.342 suites 1 1 n/a 0 0 00:05:22.342 tests 2 2 2 0 0 00:05:22.342 asserts 497 497 497 0 n/a 00:05:22.342 00:05:22.342 Elapsed time = 1.350 seconds 00:05:22.342 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.342 EAL: request: mp_malloc_sync 00:05:22.342 EAL: No shared files mode enabled, IPC is disabled 00:05:22.342 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.342 EAL: No shared files mode enabled, IPC is disabled 00:05:22.342 EAL: No shared files mode enabled, IPC is disabled 00:05:22.342 EAL: No shared files mode enabled, IPC is disabled 00:05:22.342 00:05:22.342 real 0m1.463s 00:05:22.342 user 0m0.858s 00:05:22.342 sys 0m0.569s 00:05:22.342 15:41:51 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.342 15:41:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:22.342 ************************************ 00:05:22.342 END TEST env_vtophys 00:05:22.342 ************************************ 00:05:22.342 15:41:51 env -- common/autotest_common.sh@1142 -- # return 0 00:05:22.342 15:41:51 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:22.342 15:41:51 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.342 15:41:51 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.342 15:41:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.342 ************************************ 00:05:22.342 START TEST env_pci 00:05:22.342 ************************************ 00:05:22.342 15:41:51 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:22.342 00:05:22.342 00:05:22.342 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.342 http://cunit.sourceforge.net/ 00:05:22.342 00:05:22.342 00:05:22.342 Suite: pci 00:05:22.342 Test: pci_hook ...[2024-07-12 15:41:52.004136] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4097479 has claimed it 00:05:22.342 EAL: Cannot find device (10000:00:01.0) 00:05:22.342 EAL: Failed to attach device on primary process 00:05:22.342 passed 00:05:22.342 00:05:22.342 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.342 suites 1 1 n/a 0 0 00:05:22.342 tests 1 1 1 0 0 00:05:22.342 asserts 25 25 25 0 n/a 00:05:22.342 00:05:22.342 Elapsed time = 0.022 seconds 00:05:22.342 00:05:22.342 real 0m0.035s 00:05:22.342 user 0m0.012s 00:05:22.342 sys 0m0.023s 00:05:22.342 15:41:52 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.342 15:41:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:22.342 ************************************ 00:05:22.342 END TEST env_pci 00:05:22.342 ************************************ 
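The env suite is driven by env.sh through run_test, but each stage finished above is an ordinary binary under test/env/ and can be rerun on its own when debugging a failure. A minimal sketch using the workspace path from this job (root privileges and pre-configured hugepages assumed):

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo "$rootdir/test/env/memory/memory_ut"   # mem map alloc/translation/registration cases
  sudo "$rootdir/test/env/pci/pci_ut"         # pci_hook device-claim case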
00:05:22.342 15:41:52 env -- common/autotest_common.sh@1142 -- # return 0 00:05:22.342 15:41:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.342 15:41:52 env -- env/env.sh@15 -- # uname 00:05:22.342 15:41:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.342 15:41:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:22.342 15:41:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.342 15:41:52 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:22.342 15:41:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.342 15:41:52 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.601 ************************************ 00:05:22.601 START TEST env_dpdk_post_init 00:05:22.601 ************************************ 00:05:22.601 15:41:52 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.601 EAL: Detected CPU lcores: 48 00:05:22.601 EAL: Detected NUMA nodes: 2 00:05:22.601 EAL: Detected shared linkage of DPDK 00:05:22.601 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.601 EAL: Selected IOVA mode 'VA' 00:05:22.601 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.601 EAL: VFIO support initialized 00:05:22.601 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.601 EAL: Using IOMMU type 1 (Type 1) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:22.601 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:23.536 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:23.536 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:26.808 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:05:26.808 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:05:26.808 Starting DPDK initialization... 00:05:26.808 Starting SPDK post initialization... 00:05:26.808 SPDK NVMe probe 00:05:26.808 Attaching to 0000:0b:00.0 00:05:26.808 Attached to 0000:0b:00.0 00:05:26.809 Cleaning up... 
00:05:26.809 00:05:26.809 real 0m4.323s 00:05:26.809 user 0m3.231s 00:05:26.809 sys 0m0.153s 00:05:26.809 15:41:56 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.809 15:41:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.809 ************************************ 00:05:26.809 END TEST env_dpdk_post_init 00:05:26.809 ************************************ 00:05:26.809 15:41:56 env -- common/autotest_common.sh@1142 -- # return 0 00:05:26.809 15:41:56 env -- env/env.sh@26 -- # uname 00:05:26.809 15:41:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:26.809 15:41:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.809 15:41:56 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.809 15:41:56 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.809 15:41:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.809 ************************************ 00:05:26.809 START TEST env_mem_callbacks 00:05:26.809 ************************************ 00:05:26.809 15:41:56 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:26.809 EAL: Detected CPU lcores: 48 00:05:26.809 EAL: Detected NUMA nodes: 2 00:05:26.809 EAL: Detected shared linkage of DPDK 00:05:26.809 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.809 EAL: Selected IOVA mode 'VA' 00:05:26.809 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.809 EAL: VFIO support initialized 00:05:26.809 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:26.809 00:05:26.809 00:05:26.809 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.809 http://cunit.sourceforge.net/ 00:05:26.809 00:05:26.809 00:05:26.809 Suite: memory 00:05:26.809 Test: test ... 
00:05:26.809 register 0x200000200000 2097152 00:05:26.809 malloc 3145728 00:05:26.809 register 0x200000400000 4194304 00:05:26.809 buf 0x200000500000 len 3145728 PASSED 00:05:26.809 malloc 64 00:05:26.809 buf 0x2000004fff40 len 64 PASSED 00:05:26.809 malloc 4194304 00:05:26.809 register 0x200000800000 6291456 00:05:26.809 buf 0x200000a00000 len 4194304 PASSED 00:05:26.809 free 0x200000500000 3145728 00:05:26.809 free 0x2000004fff40 64 00:05:26.809 unregister 0x200000400000 4194304 PASSED 00:05:26.809 free 0x200000a00000 4194304 00:05:26.809 unregister 0x200000800000 6291456 PASSED 00:05:26.809 malloc 8388608 00:05:26.809 register 0x200000400000 10485760 00:05:26.809 buf 0x200000600000 len 8388608 PASSED 00:05:26.809 free 0x200000600000 8388608 00:05:26.809 unregister 0x200000400000 10485760 PASSED 00:05:26.809 passed 00:05:26.809 00:05:26.809 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.809 suites 1 1 n/a 0 0 00:05:26.809 tests 1 1 1 0 0 00:05:26.809 asserts 15 15 15 0 n/a 00:05:26.809 00:05:26.809 Elapsed time = 0.005 seconds 00:05:26.809 00:05:26.809 real 0m0.046s 00:05:26.809 user 0m0.011s 00:05:26.809 sys 0m0.035s 00:05:26.809 15:41:56 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.809 15:41:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:26.809 ************************************ 00:05:26.809 END TEST env_mem_callbacks 00:05:26.809 ************************************ 00:05:26.809 15:41:56 env -- common/autotest_common.sh@1142 -- # return 0 00:05:26.809 00:05:26.809 real 0m6.315s 00:05:26.809 user 0m4.373s 00:05:26.809 sys 0m0.984s 00:05:26.809 15:41:56 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.809 15:41:56 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.809 ************************************ 00:05:26.809 END TEST env 00:05:26.809 ************************************ 00:05:27.066 15:41:56 -- common/autotest_common.sh@1142 -- # return 0 00:05:27.066 15:41:56 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:27.066 15:41:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.066 15:41:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.066 15:41:56 -- common/autotest_common.sh@10 -- # set +x 00:05:27.066 ************************************ 00:05:27.066 START TEST rpc 00:05:27.066 ************************************ 00:05:27.066 15:41:56 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:27.066 * Looking for test storage... 00:05:27.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.066 15:41:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4098136 00:05:27.066 15:41:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:27.066 15:41:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.066 15:41:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4098136 00:05:27.066 15:41:56 rpc -- common/autotest_common.sh@829 -- # '[' -z 4098136 ']' 00:05:27.066 15:41:56 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.066 15:41:56 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.066 15:41:56 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:27.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.066 15:41:56 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.066 15:41:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.066 [2024-07-12 15:41:56.678180] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:05:27.066 [2024-07-12 15:41:56.678257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098136 ] 00:05:27.066 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.066 [2024-07-12 15:41:56.734266] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.324 [2024-07-12 15:41:56.846653] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:27.324 [2024-07-12 15:41:56.846701] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4098136' to capture a snapshot of events at runtime. 00:05:27.324 [2024-07-12 15:41:56.846715] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:27.324 [2024-07-12 15:41:56.846725] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:27.324 [2024-07-12 15:41:56.846735] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4098136 for offline analysis/debug. 00:05:27.324 [2024-07-12 15:41:56.846761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.582 15:41:57 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.582 15:41:57 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.582 15:41:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.582 15:41:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:27.582 15:41:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:27.582 15:41:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:27.582 15:41:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.582 15:41:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.582 15:41:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 ************************************ 00:05:27.582 START TEST rpc_integrity 00:05:27.582 ************************************ 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.582 { 00:05:27.582 "name": "Malloc0", 00:05:27.582 "aliases": [ 00:05:27.582 "3047f2a1-3a7d-4f92-9e35-6df16d6544b8" 00:05:27.582 ], 00:05:27.582 "product_name": "Malloc disk", 00:05:27.582 "block_size": 512, 00:05:27.582 "num_blocks": 16384, 00:05:27.582 "uuid": "3047f2a1-3a7d-4f92-9e35-6df16d6544b8", 00:05:27.582 "assigned_rate_limits": { 00:05:27.582 "rw_ios_per_sec": 0, 00:05:27.582 "rw_mbytes_per_sec": 0, 00:05:27.582 "r_mbytes_per_sec": 0, 00:05:27.582 "w_mbytes_per_sec": 0 00:05:27.582 }, 00:05:27.582 "claimed": false, 00:05:27.582 "zoned": false, 00:05:27.582 "supported_io_types": { 00:05:27.582 "read": true, 00:05:27.582 "write": true, 00:05:27.582 "unmap": true, 00:05:27.582 "flush": true, 00:05:27.582 "reset": true, 00:05:27.582 "nvme_admin": false, 00:05:27.582 "nvme_io": false, 00:05:27.582 "nvme_io_md": false, 00:05:27.582 "write_zeroes": true, 00:05:27.582 "zcopy": true, 00:05:27.582 "get_zone_info": false, 00:05:27.582 "zone_management": false, 00:05:27.582 "zone_append": false, 00:05:27.582 "compare": false, 00:05:27.582 "compare_and_write": false, 00:05:27.582 "abort": true, 00:05:27.582 "seek_hole": false, 00:05:27.582 "seek_data": false, 00:05:27.582 "copy": true, 00:05:27.582 "nvme_iov_md": false 00:05:27.582 }, 00:05:27.582 "memory_domains": [ 00:05:27.582 { 00:05:27.582 "dma_device_id": "system", 00:05:27.582 "dma_device_type": 1 00:05:27.582 }, 00:05:27.582 { 00:05:27.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.582 "dma_device_type": 2 00:05:27.582 } 00:05:27.582 ], 00:05:27.582 "driver_specific": {} 00:05:27.582 } 00:05:27.582 ]' 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.582 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.582 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.582 [2024-07-12 15:41:57.214938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:27.582 [2024-07-12 15:41:57.214975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.582 [2024-07-12 15:41:57.215001] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xadeeb0 00:05:27.582 [2024-07-12 15:41:57.215013] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.582 
[2024-07-12 15:41:57.216532] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.582 [2024-07-12 15:41:57.216559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.582 Passthru0 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.583 { 00:05:27.583 "name": "Malloc0", 00:05:27.583 "aliases": [ 00:05:27.583 "3047f2a1-3a7d-4f92-9e35-6df16d6544b8" 00:05:27.583 ], 00:05:27.583 "product_name": "Malloc disk", 00:05:27.583 "block_size": 512, 00:05:27.583 "num_blocks": 16384, 00:05:27.583 "uuid": "3047f2a1-3a7d-4f92-9e35-6df16d6544b8", 00:05:27.583 "assigned_rate_limits": { 00:05:27.583 "rw_ios_per_sec": 0, 00:05:27.583 "rw_mbytes_per_sec": 0, 00:05:27.583 "r_mbytes_per_sec": 0, 00:05:27.583 "w_mbytes_per_sec": 0 00:05:27.583 }, 00:05:27.583 "claimed": true, 00:05:27.583 "claim_type": "exclusive_write", 00:05:27.583 "zoned": false, 00:05:27.583 "supported_io_types": { 00:05:27.583 "read": true, 00:05:27.583 "write": true, 00:05:27.583 "unmap": true, 00:05:27.583 "flush": true, 00:05:27.583 "reset": true, 00:05:27.583 "nvme_admin": false, 00:05:27.583 "nvme_io": false, 00:05:27.583 "nvme_io_md": false, 00:05:27.583 "write_zeroes": true, 00:05:27.583 "zcopy": true, 00:05:27.583 "get_zone_info": false, 00:05:27.583 "zone_management": false, 00:05:27.583 "zone_append": false, 00:05:27.583 "compare": false, 00:05:27.583 "compare_and_write": false, 00:05:27.583 "abort": true, 00:05:27.583 "seek_hole": false, 00:05:27.583 "seek_data": false, 00:05:27.583 "copy": true, 00:05:27.583 "nvme_iov_md": false 00:05:27.583 }, 00:05:27.583 "memory_domains": [ 00:05:27.583 { 00:05:27.583 "dma_device_id": "system", 00:05:27.583 "dma_device_type": 1 00:05:27.583 }, 00:05:27.583 { 00:05:27.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.583 "dma_device_type": 2 00:05:27.583 } 00:05:27.583 ], 00:05:27.583 "driver_specific": {} 00:05:27.583 }, 00:05:27.583 { 00:05:27.583 "name": "Passthru0", 00:05:27.583 "aliases": [ 00:05:27.583 "b86fc24d-a22c-53f3-b859-cce4a5ca81ba" 00:05:27.583 ], 00:05:27.583 "product_name": "passthru", 00:05:27.583 "block_size": 512, 00:05:27.583 "num_blocks": 16384, 00:05:27.583 "uuid": "b86fc24d-a22c-53f3-b859-cce4a5ca81ba", 00:05:27.583 "assigned_rate_limits": { 00:05:27.583 "rw_ios_per_sec": 0, 00:05:27.583 "rw_mbytes_per_sec": 0, 00:05:27.583 "r_mbytes_per_sec": 0, 00:05:27.583 "w_mbytes_per_sec": 0 00:05:27.583 }, 00:05:27.583 "claimed": false, 00:05:27.583 "zoned": false, 00:05:27.583 "supported_io_types": { 00:05:27.583 "read": true, 00:05:27.583 "write": true, 00:05:27.583 "unmap": true, 00:05:27.583 "flush": true, 00:05:27.583 "reset": true, 00:05:27.583 "nvme_admin": false, 00:05:27.583 "nvme_io": false, 00:05:27.583 "nvme_io_md": false, 00:05:27.583 "write_zeroes": true, 00:05:27.583 "zcopy": true, 00:05:27.583 "get_zone_info": false, 00:05:27.583 "zone_management": false, 00:05:27.583 "zone_append": false, 00:05:27.583 "compare": false, 00:05:27.583 "compare_and_write": false, 00:05:27.583 "abort": true, 00:05:27.583 "seek_hole": false, 
00:05:27.583 "seek_data": false, 00:05:27.583 "copy": true, 00:05:27.583 "nvme_iov_md": false 00:05:27.583 }, 00:05:27.583 "memory_domains": [ 00:05:27.583 { 00:05:27.583 "dma_device_id": "system", 00:05:27.583 "dma_device_type": 1 00:05:27.583 }, 00:05:27.583 { 00:05:27.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.583 "dma_device_type": 2 00:05:27.583 } 00:05:27.583 ], 00:05:27.583 "driver_specific": { 00:05:27.583 "passthru": { 00:05:27.583 "name": "Passthru0", 00:05:27.583 "base_bdev_name": "Malloc0" 00:05:27.583 } 00:05:27.583 } 00:05:27.583 } 00:05:27.583 ]' 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.583 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.583 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.841 15:41:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.841 00:05:27.841 real 0m0.210s 00:05:27.841 user 0m0.130s 00:05:27.841 sys 0m0.024s 00:05:27.841 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.841 15:41:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 ************************************ 00:05:27.841 END TEST rpc_integrity 00:05:27.841 ************************************ 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.841 15:41:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 ************************************ 00:05:27.841 START TEST rpc_plugins 00:05:27.841 ************************************ 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:27.841 { 00:05:27.841 "name": "Malloc1", 00:05:27.841 "aliases": [ 00:05:27.841 "aeef3b95-057a-4df1-91da-71aa969345ba" 00:05:27.841 ], 00:05:27.841 "product_name": "Malloc disk", 00:05:27.841 "block_size": 4096, 00:05:27.841 "num_blocks": 256, 00:05:27.841 "uuid": "aeef3b95-057a-4df1-91da-71aa969345ba", 00:05:27.841 "assigned_rate_limits": { 00:05:27.841 "rw_ios_per_sec": 0, 00:05:27.841 "rw_mbytes_per_sec": 0, 00:05:27.841 "r_mbytes_per_sec": 0, 00:05:27.841 "w_mbytes_per_sec": 0 00:05:27.841 }, 00:05:27.841 "claimed": false, 00:05:27.841 "zoned": false, 00:05:27.841 "supported_io_types": { 00:05:27.841 "read": true, 00:05:27.841 "write": true, 00:05:27.841 "unmap": true, 00:05:27.841 "flush": true, 00:05:27.841 "reset": true, 00:05:27.841 "nvme_admin": false, 00:05:27.841 "nvme_io": false, 00:05:27.841 "nvme_io_md": false, 00:05:27.841 "write_zeroes": true, 00:05:27.841 "zcopy": true, 00:05:27.841 "get_zone_info": false, 00:05:27.841 "zone_management": false, 00:05:27.841 "zone_append": false, 00:05:27.841 "compare": false, 00:05:27.841 "compare_and_write": false, 00:05:27.841 "abort": true, 00:05:27.841 "seek_hole": false, 00:05:27.841 "seek_data": false, 00:05:27.841 "copy": true, 00:05:27.841 "nvme_iov_md": false 00:05:27.841 }, 00:05:27.841 "memory_domains": [ 00:05:27.841 { 00:05:27.841 "dma_device_id": "system", 00:05:27.841 "dma_device_type": 1 00:05:27.841 }, 00:05:27.841 { 00:05:27.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.841 "dma_device_type": 2 00:05:27.841 } 00:05:27.841 ], 00:05:27.841 "driver_specific": {} 00:05:27.841 } 00:05:27.841 ]' 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:27.841 15:41:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:27.841 00:05:27.841 real 0m0.105s 00:05:27.841 user 0m0.065s 00:05:27.841 sys 0m0.013s 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.841 15:41:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 ************************************ 00:05:27.841 END TEST rpc_plugins 00:05:27.841 ************************************ 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.841 15:41:57 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.841 15:41:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 ************************************ 00:05:27.841 START TEST rpc_trace_cmd_test 00:05:27.841 ************************************ 00:05:27.841 15:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:27.841 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:27.841 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:27.841 15:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.841 15:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.841 15:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.841 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:27.841 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4098136", 00:05:27.841 "tpoint_group_mask": "0x8", 00:05:27.841 "iscsi_conn": { 00:05:27.841 "mask": "0x2", 00:05:27.841 "tpoint_mask": "0x0" 00:05:27.841 }, 00:05:27.841 "scsi": { 00:05:27.841 "mask": "0x4", 00:05:27.841 "tpoint_mask": "0x0" 00:05:27.841 }, 00:05:27.841 "bdev": { 00:05:27.841 "mask": "0x8", 00:05:27.841 "tpoint_mask": "0xffffffffffffffff" 00:05:27.841 }, 00:05:27.841 "nvmf_rdma": { 00:05:27.841 "mask": "0x10", 00:05:27.841 "tpoint_mask": "0x0" 00:05:27.841 }, 00:05:27.841 "nvmf_tcp": { 00:05:27.841 "mask": "0x20", 00:05:27.841 "tpoint_mask": "0x0" 00:05:27.841 }, 00:05:27.841 "ftl": { 00:05:27.841 "mask": "0x40", 00:05:27.841 "tpoint_mask": "0x0" 00:05:27.841 }, 00:05:27.841 "blobfs": { 00:05:27.841 "mask": "0x80", 00:05:27.841 "tpoint_mask": "0x0" 00:05:27.841 }, 00:05:27.841 "dsa": { 00:05:27.842 "mask": "0x200", 00:05:27.842 "tpoint_mask": "0x0" 00:05:27.842 }, 00:05:27.842 "thread": { 00:05:27.842 "mask": "0x400", 00:05:27.842 "tpoint_mask": "0x0" 00:05:27.842 }, 00:05:27.842 "nvme_pcie": { 00:05:27.842 "mask": "0x800", 00:05:27.842 "tpoint_mask": "0x0" 00:05:27.842 }, 00:05:27.842 "iaa": { 00:05:27.842 "mask": "0x1000", 00:05:27.842 "tpoint_mask": "0x0" 00:05:27.842 }, 00:05:27.842 "nvme_tcp": { 00:05:27.842 "mask": "0x2000", 00:05:27.842 "tpoint_mask": "0x0" 00:05:27.842 }, 00:05:27.842 "bdev_nvme": { 00:05:27.842 "mask": "0x4000", 00:05:27.842 "tpoint_mask": "0x0" 00:05:27.842 }, 00:05:27.842 "sock": { 00:05:27.842 "mask": "0x8000", 00:05:27.842 "tpoint_mask": "0x0" 00:05:27.842 } 00:05:27.842 }' 00:05:27.842 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:27.842 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:27.842 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
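The trace_get_info output above lines up with the `-e bdev` flag the target was launched with: the group mask is 0x8 (the bdev group), that group's tpoint_mask is fully enabled, and the trace ring is backed by /dev/shm/spdk_tgt_trace.pid4098136. A minimal sketch of inspecting those events by hand, using only the invocation the target itself suggests earlier in this log (the spdk_trace binary path is assumed to sit next to spdk_tgt; run from the spdk checkout):

    # while spdk_tgt (pid 4098136) is still running, snapshot the live trace ring
    build/bin/spdk_trace -s spdk_tgt -p 4098136
    # or keep the shared-memory file for offline analysis after the target exits
    cp /dev/shm/spdk_tgt_trace.pid4098136 /tmp/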
00:05:28.099 00:05:28.099 real 0m0.192s 00:05:28.099 user 0m0.168s 00:05:28.099 sys 0m0.018s 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.099 15:41:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 ************************************ 00:05:28.099 END TEST rpc_trace_cmd_test 00:05:28.099 ************************************ 00:05:28.099 15:41:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:28.099 15:41:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:28.099 15:41:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:28.099 15:41:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:28.099 15:41:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.099 15:41:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.099 15:41:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 ************************************ 00:05:28.099 START TEST rpc_daemon_integrity 00:05:28.099 ************************************ 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.099 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:28.099 { 00:05:28.099 "name": "Malloc2", 00:05:28.099 "aliases": [ 00:05:28.099 "be783ff2-d038-4b1f-8f49-3faba7be9732" 00:05:28.099 ], 00:05:28.099 "product_name": "Malloc disk", 00:05:28.099 "block_size": 512, 00:05:28.099 "num_blocks": 16384, 00:05:28.099 "uuid": "be783ff2-d038-4b1f-8f49-3faba7be9732", 00:05:28.099 "assigned_rate_limits": { 00:05:28.099 "rw_ios_per_sec": 0, 00:05:28.099 "rw_mbytes_per_sec": 0, 00:05:28.099 "r_mbytes_per_sec": 0, 00:05:28.099 "w_mbytes_per_sec": 0 00:05:28.099 }, 00:05:28.099 "claimed": false, 00:05:28.099 "zoned": false, 00:05:28.099 "supported_io_types": { 00:05:28.099 "read": true, 00:05:28.099 "write": true, 00:05:28.099 "unmap": true, 00:05:28.099 "flush": true, 00:05:28.099 "reset": true, 00:05:28.099 "nvme_admin": false, 00:05:28.099 "nvme_io": false, 
00:05:28.099 "nvme_io_md": false, 00:05:28.099 "write_zeroes": true, 00:05:28.099 "zcopy": true, 00:05:28.099 "get_zone_info": false, 00:05:28.099 "zone_management": false, 00:05:28.099 "zone_append": false, 00:05:28.099 "compare": false, 00:05:28.099 "compare_and_write": false, 00:05:28.099 "abort": true, 00:05:28.099 "seek_hole": false, 00:05:28.099 "seek_data": false, 00:05:28.099 "copy": true, 00:05:28.099 "nvme_iov_md": false 00:05:28.099 }, 00:05:28.099 "memory_domains": [ 00:05:28.099 { 00:05:28.099 "dma_device_id": "system", 00:05:28.099 "dma_device_type": 1 00:05:28.099 }, 00:05:28.099 { 00:05:28.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.100 "dma_device_type": 2 00:05:28.100 } 00:05:28.100 ], 00:05:28.100 "driver_specific": {} 00:05:28.100 } 00:05:28.100 ]' 00:05:28.100 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.357 [2024-07-12 15:41:57.864811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:28.357 [2024-07-12 15:41:57.864855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:28.357 [2024-07-12 15:41:57.864878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xad8080 00:05:28.357 [2024-07-12 15:41:57.864890] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:28.357 [2024-07-12 15:41:57.866074] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:28.357 [2024-07-12 15:41:57.866099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:28.357 Passthru0 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:28.357 { 00:05:28.357 "name": "Malloc2", 00:05:28.357 "aliases": [ 00:05:28.357 "be783ff2-d038-4b1f-8f49-3faba7be9732" 00:05:28.357 ], 00:05:28.357 "product_name": "Malloc disk", 00:05:28.357 "block_size": 512, 00:05:28.357 "num_blocks": 16384, 00:05:28.357 "uuid": "be783ff2-d038-4b1f-8f49-3faba7be9732", 00:05:28.357 "assigned_rate_limits": { 00:05:28.357 "rw_ios_per_sec": 0, 00:05:28.357 "rw_mbytes_per_sec": 0, 00:05:28.357 "r_mbytes_per_sec": 0, 00:05:28.357 "w_mbytes_per_sec": 0 00:05:28.357 }, 00:05:28.357 "claimed": true, 00:05:28.357 "claim_type": "exclusive_write", 00:05:28.357 "zoned": false, 00:05:28.357 "supported_io_types": { 00:05:28.357 "read": true, 00:05:28.357 "write": true, 00:05:28.357 "unmap": true, 00:05:28.357 "flush": true, 00:05:28.357 "reset": true, 00:05:28.357 "nvme_admin": false, 00:05:28.357 "nvme_io": false, 00:05:28.357 "nvme_io_md": false, 00:05:28.357 "write_zeroes": true, 00:05:28.357 "zcopy": true, 00:05:28.357 "get_zone_info": 
false, 00:05:28.357 "zone_management": false, 00:05:28.357 "zone_append": false, 00:05:28.357 "compare": false, 00:05:28.357 "compare_and_write": false, 00:05:28.357 "abort": true, 00:05:28.357 "seek_hole": false, 00:05:28.357 "seek_data": false, 00:05:28.357 "copy": true, 00:05:28.357 "nvme_iov_md": false 00:05:28.357 }, 00:05:28.357 "memory_domains": [ 00:05:28.357 { 00:05:28.357 "dma_device_id": "system", 00:05:28.357 "dma_device_type": 1 00:05:28.357 }, 00:05:28.357 { 00:05:28.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.357 "dma_device_type": 2 00:05:28.357 } 00:05:28.357 ], 00:05:28.357 "driver_specific": {} 00:05:28.357 }, 00:05:28.357 { 00:05:28.357 "name": "Passthru0", 00:05:28.357 "aliases": [ 00:05:28.357 "487c1ee2-8a68-56f8-887f-13ce59cd49f7" 00:05:28.357 ], 00:05:28.357 "product_name": "passthru", 00:05:28.357 "block_size": 512, 00:05:28.357 "num_blocks": 16384, 00:05:28.357 "uuid": "487c1ee2-8a68-56f8-887f-13ce59cd49f7", 00:05:28.357 "assigned_rate_limits": { 00:05:28.357 "rw_ios_per_sec": 0, 00:05:28.357 "rw_mbytes_per_sec": 0, 00:05:28.357 "r_mbytes_per_sec": 0, 00:05:28.357 "w_mbytes_per_sec": 0 00:05:28.357 }, 00:05:28.357 "claimed": false, 00:05:28.357 "zoned": false, 00:05:28.357 "supported_io_types": { 00:05:28.357 "read": true, 00:05:28.357 "write": true, 00:05:28.357 "unmap": true, 00:05:28.357 "flush": true, 00:05:28.357 "reset": true, 00:05:28.357 "nvme_admin": false, 00:05:28.357 "nvme_io": false, 00:05:28.357 "nvme_io_md": false, 00:05:28.357 "write_zeroes": true, 00:05:28.357 "zcopy": true, 00:05:28.357 "get_zone_info": false, 00:05:28.357 "zone_management": false, 00:05:28.357 "zone_append": false, 00:05:28.357 "compare": false, 00:05:28.357 "compare_and_write": false, 00:05:28.357 "abort": true, 00:05:28.357 "seek_hole": false, 00:05:28.357 "seek_data": false, 00:05:28.357 "copy": true, 00:05:28.357 "nvme_iov_md": false 00:05:28.357 }, 00:05:28.357 "memory_domains": [ 00:05:28.357 { 00:05:28.357 "dma_device_id": "system", 00:05:28.357 "dma_device_type": 1 00:05:28.357 }, 00:05:28.357 { 00:05:28.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.357 "dma_device_type": 2 00:05:28.357 } 00:05:28.357 ], 00:05:28.357 "driver_specific": { 00:05:28.357 "passthru": { 00:05:28.357 "name": "Passthru0", 00:05:28.357 "base_bdev_name": "Malloc2" 00:05:28.357 } 00:05:28.357 } 00:05:28.357 } 00:05:28.357 ]' 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:28.357 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.357 15:41:57 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.358 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.358 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:28.358 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:28.358 15:41:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:28.358 00:05:28.358 real 0m0.221s 00:05:28.358 user 0m0.135s 00:05:28.358 sys 0m0.026s 00:05:28.358 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.358 15:41:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.358 ************************************ 00:05:28.358 END TEST rpc_daemon_integrity 00:05:28.358 ************************************ 00:05:28.358 15:41:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:28.358 15:41:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:28.358 15:41:58 rpc -- rpc/rpc.sh@84 -- # killprocess 4098136 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@948 -- # '[' -z 4098136 ']' 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@952 -- # kill -0 4098136 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4098136 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4098136' 00:05:28.358 killing process with pid 4098136 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@967 -- # kill 4098136 00:05:28.358 15:41:58 rpc -- common/autotest_common.sh@972 -- # wait 4098136 00:05:28.922 00:05:28.922 real 0m1.898s 00:05:28.922 user 0m2.343s 00:05:28.922 sys 0m0.589s 00:05:28.922 15:41:58 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.922 15:41:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.922 ************************************ 00:05:28.922 END TEST rpc 00:05:28.922 ************************************ 00:05:28.922 15:41:58 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.922 15:41:58 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:28.922 15:41:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.922 15:41:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.922 15:41:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.922 ************************************ 00:05:28.922 START TEST skip_rpc 00:05:28.922 ************************************ 00:05:28.922 15:41:58 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:28.922 * Looking for test storage... 
00:05:28.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:28.922 15:41:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.922 15:41:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.922 15:41:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:28.922 15:41:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.922 15:41:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.922 15:41:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.922 ************************************ 00:05:28.922 START TEST skip_rpc 00:05:28.922 ************************************ 00:05:28.922 15:41:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:28.922 15:41:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4098568 00:05:28.922 15:41:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:28.922 15:41:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.922 15:41:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:28.922 [2024-07-12 15:41:58.647179] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:05:28.922 [2024-07-12 15:41:58.647260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098568 ] 00:05:29.179 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.179 [2024-07-12 15:41:58.703850] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.179 [2024-07-12 15:41:58.812236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.466 15:42:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:34.466 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4098568 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 4098568 ']' 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 4098568 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4098568 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4098568' 00:05:34.467 killing process with pid 4098568 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 4098568 00:05:34.467 15:42:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 4098568 00:05:34.467 00:05:34.467 real 0m5.486s 00:05:34.467 user 0m5.191s 00:05:34.467 sys 0m0.300s 00:05:34.467 15:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.467 15:42:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.467 ************************************ 00:05:34.467 END TEST skip_rpc 00:05:34.467 ************************************ 00:05:34.467 15:42:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:34.467 15:42:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:34.467 15:42:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.467 15:42:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.467 15:42:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.467 ************************************ 00:05:34.467 START TEST skip_rpc_with_json 00:05:34.467 ************************************ 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4099368 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4099368 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 4099368 ']' 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.467 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.467 [2024-07-12 15:42:04.184806] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:05:34.467 [2024-07-12 15:42:04.184903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4099368 ] 00:05:34.726 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.726 [2024-07-12 15:42:04.243350] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.726 [2024-07-12 15:42:04.346459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.984 [2024-07-12 15:42:04.593631] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:34.984 request: 00:05:34.984 { 00:05:34.984 "trtype": "tcp", 00:05:34.984 "method": "nvmf_get_transports", 00:05:34.984 "req_id": 1 00:05:34.984 } 00:05:34.984 Got JSON-RPC error response 00:05:34.984 response: 00:05:34.984 { 00:05:34.984 "code": -19, 00:05:34.984 "message": "No such device" 00:05:34.984 } 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.984 [2024-07-12 15:42:04.601756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.984 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.242 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.242 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.242 { 00:05:35.242 "subsystems": [ 00:05:35.242 { 00:05:35.242 "subsystem": "vfio_user_target", 00:05:35.242 "config": null 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "keyring", 00:05:35.242 "config": [] 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "iobuf", 00:05:35.242 "config": [ 00:05:35.242 { 00:05:35.242 "method": "iobuf_set_options", 00:05:35.242 "params": { 00:05:35.242 "small_pool_count": 8192, 00:05:35.242 "large_pool_count": 1024, 00:05:35.242 "small_bufsize": 8192, 00:05:35.242 "large_bufsize": 
135168 00:05:35.242 } 00:05:35.242 } 00:05:35.242 ] 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "sock", 00:05:35.242 "config": [ 00:05:35.242 { 00:05:35.242 "method": "sock_set_default_impl", 00:05:35.242 "params": { 00:05:35.242 "impl_name": "posix" 00:05:35.242 } 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "method": "sock_impl_set_options", 00:05:35.242 "params": { 00:05:35.242 "impl_name": "ssl", 00:05:35.242 "recv_buf_size": 4096, 00:05:35.242 "send_buf_size": 4096, 00:05:35.242 "enable_recv_pipe": true, 00:05:35.242 "enable_quickack": false, 00:05:35.242 "enable_placement_id": 0, 00:05:35.242 "enable_zerocopy_send_server": true, 00:05:35.242 "enable_zerocopy_send_client": false, 00:05:35.242 "zerocopy_threshold": 0, 00:05:35.242 "tls_version": 0, 00:05:35.242 "enable_ktls": false 00:05:35.242 } 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "method": "sock_impl_set_options", 00:05:35.242 "params": { 00:05:35.242 "impl_name": "posix", 00:05:35.242 "recv_buf_size": 2097152, 00:05:35.242 "send_buf_size": 2097152, 00:05:35.242 "enable_recv_pipe": true, 00:05:35.242 "enable_quickack": false, 00:05:35.242 "enable_placement_id": 0, 00:05:35.242 "enable_zerocopy_send_server": true, 00:05:35.242 "enable_zerocopy_send_client": false, 00:05:35.242 "zerocopy_threshold": 0, 00:05:35.242 "tls_version": 0, 00:05:35.242 "enable_ktls": false 00:05:35.242 } 00:05:35.242 } 00:05:35.242 ] 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "vmd", 00:05:35.242 "config": [] 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "accel", 00:05:35.242 "config": [ 00:05:35.242 { 00:05:35.242 "method": "accel_set_options", 00:05:35.242 "params": { 00:05:35.242 "small_cache_size": 128, 00:05:35.242 "large_cache_size": 16, 00:05:35.242 "task_count": 2048, 00:05:35.242 "sequence_count": 2048, 00:05:35.242 "buf_count": 2048 00:05:35.242 } 00:05:35.242 } 00:05:35.242 ] 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "bdev", 00:05:35.242 "config": [ 00:05:35.242 { 00:05:35.242 "method": "bdev_set_options", 00:05:35.242 "params": { 00:05:35.242 "bdev_io_pool_size": 65535, 00:05:35.242 "bdev_io_cache_size": 256, 00:05:35.242 "bdev_auto_examine": true, 00:05:35.242 "iobuf_small_cache_size": 128, 00:05:35.242 "iobuf_large_cache_size": 16 00:05:35.242 } 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "method": "bdev_raid_set_options", 00:05:35.242 "params": { 00:05:35.242 "process_window_size_kb": 1024 00:05:35.242 } 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "method": "bdev_iscsi_set_options", 00:05:35.242 "params": { 00:05:35.242 "timeout_sec": 30 00:05:35.242 } 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "method": "bdev_nvme_set_options", 00:05:35.242 "params": { 00:05:35.242 "action_on_timeout": "none", 00:05:35.242 "timeout_us": 0, 00:05:35.242 "timeout_admin_us": 0, 00:05:35.242 "keep_alive_timeout_ms": 10000, 00:05:35.242 "arbitration_burst": 0, 00:05:35.242 "low_priority_weight": 0, 00:05:35.242 "medium_priority_weight": 0, 00:05:35.242 "high_priority_weight": 0, 00:05:35.242 "nvme_adminq_poll_period_us": 10000, 00:05:35.242 "nvme_ioq_poll_period_us": 0, 00:05:35.242 "io_queue_requests": 0, 00:05:35.242 "delay_cmd_submit": true, 00:05:35.242 "transport_retry_count": 4, 00:05:35.242 "bdev_retry_count": 3, 00:05:35.242 "transport_ack_timeout": 0, 00:05:35.242 "ctrlr_loss_timeout_sec": 0, 00:05:35.242 "reconnect_delay_sec": 0, 00:05:35.242 "fast_io_fail_timeout_sec": 0, 00:05:35.242 "disable_auto_failback": false, 00:05:35.242 "generate_uuids": false, 00:05:35.242 "transport_tos": 0, 
00:05:35.242 "nvme_error_stat": false, 00:05:35.242 "rdma_srq_size": 0, 00:05:35.242 "io_path_stat": false, 00:05:35.242 "allow_accel_sequence": false, 00:05:35.242 "rdma_max_cq_size": 0, 00:05:35.242 "rdma_cm_event_timeout_ms": 0, 00:05:35.242 "dhchap_digests": [ 00:05:35.242 "sha256", 00:05:35.242 "sha384", 00:05:35.242 "sha512" 00:05:35.242 ], 00:05:35.242 "dhchap_dhgroups": [ 00:05:35.242 "null", 00:05:35.242 "ffdhe2048", 00:05:35.242 "ffdhe3072", 00:05:35.242 "ffdhe4096", 00:05:35.242 "ffdhe6144", 00:05:35.242 "ffdhe8192" 00:05:35.242 ] 00:05:35.242 } 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "method": "bdev_nvme_set_hotplug", 00:05:35.242 "params": { 00:05:35.242 "period_us": 100000, 00:05:35.242 "enable": false 00:05:35.242 } 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "method": "bdev_wait_for_examine" 00:05:35.242 } 00:05:35.242 ] 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "scsi", 00:05:35.242 "config": null 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "scheduler", 00:05:35.242 "config": [ 00:05:35.242 { 00:05:35.242 "method": "framework_set_scheduler", 00:05:35.242 "params": { 00:05:35.242 "name": "static" 00:05:35.242 } 00:05:35.242 } 00:05:35.242 ] 00:05:35.242 }, 00:05:35.242 { 00:05:35.242 "subsystem": "vhost_scsi", 00:05:35.242 "config": [] 00:05:35.242 }, 00:05:35.242 { 00:05:35.243 "subsystem": "vhost_blk", 00:05:35.243 "config": [] 00:05:35.243 }, 00:05:35.243 { 00:05:35.243 "subsystem": "ublk", 00:05:35.243 "config": [] 00:05:35.243 }, 00:05:35.243 { 00:05:35.243 "subsystem": "nbd", 00:05:35.243 "config": [] 00:05:35.243 }, 00:05:35.243 { 00:05:35.243 "subsystem": "nvmf", 00:05:35.243 "config": [ 00:05:35.243 { 00:05:35.243 "method": "nvmf_set_config", 00:05:35.243 "params": { 00:05:35.243 "discovery_filter": "match_any", 00:05:35.243 "admin_cmd_passthru": { 00:05:35.243 "identify_ctrlr": false 00:05:35.243 } 00:05:35.243 } 00:05:35.243 }, 00:05:35.243 { 00:05:35.243 "method": "nvmf_set_max_subsystems", 00:05:35.243 "params": { 00:05:35.243 "max_subsystems": 1024 00:05:35.243 } 00:05:35.243 }, 00:05:35.243 { 00:05:35.243 "method": "nvmf_set_crdt", 00:05:35.243 "params": { 00:05:35.243 "crdt1": 0, 00:05:35.243 "crdt2": 0, 00:05:35.243 "crdt3": 0 00:05:35.243 } 00:05:35.243 }, 00:05:35.243 { 00:05:35.243 "method": "nvmf_create_transport", 00:05:35.243 "params": { 00:05:35.243 "trtype": "TCP", 00:05:35.243 "max_queue_depth": 128, 00:05:35.243 "max_io_qpairs_per_ctrlr": 127, 00:05:35.243 "in_capsule_data_size": 4096, 00:05:35.243 "max_io_size": 131072, 00:05:35.243 "io_unit_size": 131072, 00:05:35.243 "max_aq_depth": 128, 00:05:35.243 "num_shared_buffers": 511, 00:05:35.243 "buf_cache_size": 4294967295, 00:05:35.243 "dif_insert_or_strip": false, 00:05:35.243 "zcopy": false, 00:05:35.243 "c2h_success": true, 00:05:35.243 "sock_priority": 0, 00:05:35.243 "abort_timeout_sec": 1, 00:05:35.243 "ack_timeout": 0, 00:05:35.243 "data_wr_pool_size": 0 00:05:35.243 } 00:05:35.243 } 00:05:35.243 ] 00:05:35.243 }, 00:05:35.243 { 00:05:35.243 "subsystem": "iscsi", 00:05:35.243 "config": [ 00:05:35.243 { 00:05:35.243 "method": "iscsi_set_options", 00:05:35.243 "params": { 00:05:35.243 "node_base": "iqn.2016-06.io.spdk", 00:05:35.243 "max_sessions": 128, 00:05:35.243 "max_connections_per_session": 2, 00:05:35.243 "max_queue_depth": 64, 00:05:35.243 "default_time2wait": 2, 00:05:35.243 "default_time2retain": 20, 00:05:35.243 "first_burst_length": 8192, 00:05:35.243 "immediate_data": true, 00:05:35.243 "allow_duplicated_isid": false, 00:05:35.243 
"error_recovery_level": 0, 00:05:35.243 "nop_timeout": 60, 00:05:35.243 "nop_in_interval": 30, 00:05:35.243 "disable_chap": false, 00:05:35.243 "require_chap": false, 00:05:35.243 "mutual_chap": false, 00:05:35.243 "chap_group": 0, 00:05:35.243 "max_large_datain_per_connection": 64, 00:05:35.243 "max_r2t_per_connection": 4, 00:05:35.243 "pdu_pool_size": 36864, 00:05:35.243 "immediate_data_pool_size": 16384, 00:05:35.243 "data_out_pool_size": 2048 00:05:35.243 } 00:05:35.243 } 00:05:35.243 ] 00:05:35.243 } 00:05:35.243 ] 00:05:35.243 } 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4099368 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4099368 ']' 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4099368 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099368 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099368' 00:05:35.243 killing process with pid 4099368 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4099368 00:05:35.243 15:42:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4099368 00:05:35.501 15:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4099510 00:05:35.501 15:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.501 15:42:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4099510 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4099510 ']' 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4099510 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4099510 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4099510' 00:05:40.779 killing process with pid 4099510 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4099510 00:05:40.779 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4099510 
00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:41.037 00:05:41.037 real 0m6.555s 00:05:41.037 user 0m6.153s 00:05:41.037 sys 0m0.664s 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:41.037 ************************************ 00:05:41.037 END TEST skip_rpc_with_json 00:05:41.037 ************************************ 00:05:41.037 15:42:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:41.037 15:42:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:41.037 15:42:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.037 15:42:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.037 15:42:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.037 ************************************ 00:05:41.037 START TEST skip_rpc_with_delay 00:05:41.037 ************************************ 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:41.037 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:41.295 [2024-07-12 15:42:10.791089] app.c: 836:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
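That *ERROR* line is the expected outcome: skip_rpc_with_delay deliberately combines --no-rpc-server with --wait-for-rpc, and spdk_app_start rejects the pair because --wait-for-rpc holds initialization until it is released over the very RPC socket that --no-rpc-server never opens. A minimal sketch of the failing and the working invocation; the release call (framework_start_init) is an assumption here, since it never appears in this log:

    # rejected, as above: nothing could ever resume initialization
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # accepted: the target idles until the (assumed) framework_start_init RPC arrives
    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    scripts/rpc.py framework_start_init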
00:05:41.295 [2024-07-12 15:42:10.791186] app.c: 715:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:41.295 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:41.295 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:41.295 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:41.295 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:41.295 00:05:41.295 real 0m0.070s 00:05:41.295 user 0m0.045s 00:05:41.295 sys 0m0.024s 00:05:41.295 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.295 15:42:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:41.295 ************************************ 00:05:41.295 END TEST skip_rpc_with_delay 00:05:41.295 ************************************ 00:05:41.295 15:42:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:41.295 15:42:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:41.295 15:42:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:41.295 15:42:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:41.295 15:42:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.295 15:42:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.295 15:42:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.295 ************************************ 00:05:41.295 START TEST exit_on_failed_rpc_init 00:05:41.295 ************************************ 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4100736 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4100736 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 4100736 ']' 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.295 15:42:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:41.295 [2024-07-12 15:42:10.906447] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:05:41.295 [2024-07-12 15:42:10.906527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100736 ] 00:05:41.295 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.295 [2024-07-12 15:42:10.962964] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.553 [2024-07-12 15:42:11.074656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:41.810 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:41.810 [2024-07-12 15:42:11.374479] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:05:41.810 [2024-07-12 15:42:11.374564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100749 ] 00:05:41.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.810 [2024-07-12 15:42:11.434722] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.068 [2024-07-12 15:42:11.544865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.068 [2024-07-12 15:42:11.544973] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:42.068 [2024-07-12 15:42:11.544993] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:42.068 [2024-07-12 15:42:11.545004] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4100736 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 4100736 ']' 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 4100736 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4100736 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4100736' 00:05:42.068 killing process with pid 4100736 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 4100736 00:05:42.068 15:42:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 4100736 00:05:42.634 00:05:42.634 real 0m1.287s 00:05:42.634 user 0m1.457s 00:05:42.634 sys 0m0.438s 00:05:42.634 15:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.634 15:42:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.634 ************************************ 00:05:42.634 END TEST exit_on_failed_rpc_init 00:05:42.634 ************************************ 00:05:42.634 15:42:12 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.634 15:42:12 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:42.634 00:05:42.634 real 0m13.652s 00:05:42.634 user 0m12.953s 00:05:42.634 sys 0m1.589s 00:05:42.634 15:42:12 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.634 15:42:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.634 ************************************ 00:05:42.634 END TEST skip_rpc 00:05:42.634 ************************************ 00:05:42.634 15:42:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.634 15:42:12 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:42.634 15:42:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.634 15:42:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.634 15:42:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.634 ************************************ 00:05:42.634 START TEST rpc_client 00:05:42.634 ************************************ 00:05:42.634 15:42:12 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:42.634 * Looking for test storage... 00:05:42.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:42.634 15:42:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:42.634 OK 00:05:42.634 15:42:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:42.634 00:05:42.634 real 0m0.065s 00:05:42.634 user 0m0.031s 00:05:42.634 sys 0m0.038s 00:05:42.634 15:42:12 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.634 15:42:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:42.634 ************************************ 00:05:42.634 END TEST rpc_client 00:05:42.634 ************************************ 00:05:42.634 15:42:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:42.634 15:42:12 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:42.634 15:42:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.634 15:42:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.634 15:42:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.634 ************************************ 00:05:42.634 START TEST json_config 00:05:42.634 ************************************ 00:05:42.634 15:42:12 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:42.892 15:42:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.892 
15:42:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.892 15:42:12 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:42.892 15:42:12 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.892 15:42:12 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.892 15:42:12 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.892 15:42:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.893 15:42:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.893 15:42:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.893 15:42:12 json_config -- paths/export.sh@5 -- # export PATH 00:05:42.893 15:42:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@47 -- # : 0 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.893 15:42:12 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:42.893 15:42:12 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:42.893 INFO: JSON configuration test init 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.893 15:42:12 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:42.893 15:42:12 json_config -- json_config/common.sh@9 -- # local app=target 00:05:42.893 15:42:12 json_config -- json_config/common.sh@10 -- # shift 00:05:42.893 15:42:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:42.893 15:42:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:42.893 15:42:12 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:42.893 15:42:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.893 15:42:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:42.893 15:42:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4100996 00:05:42.893 15:42:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:42.893 15:42:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:42.893 Waiting for target to run... 00:05:42.893 15:42:12 json_config -- json_config/common.sh@25 -- # waitforlisten 4100996 /var/tmp/spdk_tgt.sock 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 4100996 ']' 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:42.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.893 15:42:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.893 [2024-07-12 15:42:12.435726] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:05:42.893 [2024-07-12 15:42:12.435813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4100996 ] 00:05:42.893 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.460 [2024-07-12 15:42:12.989679] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.460 [2024-07-12 15:42:13.083456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.717 15:42:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.717 15:42:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:43.717 15:42:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:43.717 00:05:43.717 15:42:13 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:43.717 15:42:13 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:43.717 15:42:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:43.717 15:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.717 15:42:13 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:43.717 15:42:13 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:43.717 15:42:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.717 15:42:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.717 15:42:13 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:43.717 15:42:13 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:43.717 15:42:13 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:47.000 15:42:16 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:47.000 15:42:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:47.000 15:42:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.000 15:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.000 15:42:16 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:47.000 15:42:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:47.000 15:42:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:47.000 15:42:16 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:47.000 15:42:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:47.000 15:42:16 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:47.258 15:42:16 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.258 15:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:47.258 15:42:16 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.258 15:42:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:47.258 15:42:16 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:47.258 15:42:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:47.517 MallocForNvmf0 00:05:47.517 15:42:17 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:47.517 15:42:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:47.775 MallocForNvmf1 00:05:47.775 15:42:17 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:47.775 15:42:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:48.033 [2024-07-12 15:42:17.571801] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.033 15:42:17 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.033 15:42:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:48.291 15:42:17 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:48.291 15:42:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:48.549 15:42:18 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:48.549 15:42:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:48.806 15:42:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:48.806 15:42:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:49.063 [2024-07-12 15:42:18.547150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.063 15:42:18 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:49.063 15:42:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.063 15:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.063 15:42:18 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:49.063 15:42:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.063 15:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.063 15:42:18 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:49.063 15:42:18 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.063 15:42:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:49.321 MallocBdevForConfigChangeCheck 00:05:49.321 15:42:18 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:49.321 15:42:18 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:49.321 15:42:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.321 15:42:18 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:49.321 15:42:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.578 15:42:19 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:49.578 INFO: shutting down applications... 00:05:49.578 15:42:19 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:49.578 15:42:19 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:49.578 15:42:19 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:49.578 15:42:19 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:51.475 Calling clear_iscsi_subsystem 00:05:51.475 Calling clear_nvmf_subsystem 00:05:51.475 Calling clear_nbd_subsystem 00:05:51.475 Calling clear_ublk_subsystem 00:05:51.475 Calling clear_vhost_blk_subsystem 00:05:51.475 Calling clear_vhost_scsi_subsystem 00:05:51.475 Calling clear_bdev_subsystem 00:05:51.475 15:42:20 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:51.475 15:42:20 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:51.475 15:42:20 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:51.475 15:42:20 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.475 15:42:20 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:51.475 15:42:20 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:51.475 15:42:21 json_config -- json_config/json_config.sh@345 -- # break 00:05:51.475 15:42:21 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:51.475 15:42:21 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:51.475 15:42:21 json_config -- json_config/common.sh@31 -- # local app=target 00:05:51.475 15:42:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.475 15:42:21 json_config -- json_config/common.sh@35 -- # [[ -n 4100996 ]] 00:05:51.475 15:42:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4100996 00:05:51.475 15:42:21 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.475 15:42:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.475 15:42:21 json_config -- json_config/common.sh@41 -- # kill -0 4100996 00:05:51.475 15:42:21 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.041 15:42:21 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.041 15:42:21 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.041 15:42:21 json_config -- json_config/common.sh@41 -- # kill -0 4100996 00:05:52.041 15:42:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:52.041 15:42:21 json_config -- json_config/common.sh@43 -- # break 00:05:52.041 15:42:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:52.041 15:42:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:52.041 SPDK target shutdown done 00:05:52.041 15:42:21 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:52.041 INFO: relaunching applications... 00:05:52.041 15:42:21 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.041 15:42:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:52.041 15:42:21 json_config -- json_config/common.sh@10 -- # shift 00:05:52.041 15:42:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:52.041 15:42:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:52.041 15:42:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:52.041 15:42:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.041 15:42:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.041 15:42:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4102298 00:05:52.041 15:42:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:52.041 15:42:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:52.041 Waiting for target to run... 00:05:52.041 15:42:21 json_config -- json_config/common.sh@25 -- # waitforlisten 4102298 /var/tmp/spdk_tgt.sock 00:05:52.041 15:42:21 json_config -- common/autotest_common.sh@829 -- # '[' -z 4102298 ']' 00:05:52.041 15:42:21 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.041 15:42:21 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.041 15:42:21 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.041 15:42:21 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.041 15:42:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.041 [2024-07-12 15:42:21.763488] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
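The relaunch above replays the configuration that the json_config test just built over RPC: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces, and a listener on 127.0.0.1:4420, all saved by save_config into spdk_tgt_config.json and now loaded back with --json. A hedged sketch of that RPC sequence, with the commands taken verbatim from the log and $RPC used here only as shorthand for scripts/rpc.py -s /var/tmp/spdk_tgt.sock:
    # shorthand defined for this sketch only
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420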
00:05:52.041 [2024-07-12 15:42:21.763601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4102298 ] 00:05:52.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.866 [2024-07-12 15:42:22.289058] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.866 [2024-07-12 15:42:22.384920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.184 [2024-07-12 15:42:25.416097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.184 [2024-07-12 15:42:25.448567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:56.751 15:42:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.751 15:42:26 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:56.751 15:42:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:56.751 00:05:56.751 15:42:26 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:56.751 15:42:26 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:56.751 INFO: Checking if target configuration is the same... 00:05:56.751 15:42:26 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.751 15:42:26 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:56.751 15:42:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.751 + '[' 2 -ne 2 ']' 00:05:56.751 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:56.751 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:56.751 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:56.751 +++ basename /dev/fd/62 00:05:56.751 ++ mktemp /tmp/62.XXX 00:05:56.751 + tmp_file_1=/tmp/62.2YZ 00:05:56.751 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:56.751 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:56.751 + tmp_file_2=/tmp/spdk_tgt_config.json.7Aq 00:05:56.751 + ret=0 00:05:56.751 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:57.036 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:57.036 + diff -u /tmp/62.2YZ /tmp/spdk_tgt_config.json.7Aq 00:05:57.036 + echo 'INFO: JSON config files are the same' 00:05:57.036 INFO: JSON config files are the same 00:05:57.036 + rm /tmp/62.2YZ /tmp/spdk_tgt_config.json.7Aq 00:05:57.036 + exit 0 00:05:57.036 15:42:26 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:57.036 15:42:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:57.036 INFO: changing configuration and checking if this can be detected... 
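The comparison above is the round-trip check: the relaunched target's live configuration is dumped with save_config, both documents are normalized by config_filter.py -method sort, and an empty diff (exit 0) means the --json relaunch reproduced the saved configuration. A hedged sketch of the same flow, assuming config_filter.py reads the JSON on stdin and writes the normalized form on stdout (the harness routes it through the /tmp/62.* and /tmp/spdk_tgt_config.json.* files shown above); $SPDK_DIR is shorthand introduced for this sketch:
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK_DIR/test/json_config/config_filter.py -method sort > /tmp/live.sorted
    $SPDK_DIR/test/json_config/config_filter.py -method sort \
        < $SPDK_DIR/spdk_tgt_config.json > /tmp/file.sorted
    diff -u /tmp/file.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'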
00:05:57.036 15:42:26 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.036 15:42:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:57.294 15:42:26 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.294 15:42:26 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:57.294 15:42:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.294 + '[' 2 -ne 2 ']' 00:05:57.294 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:57.294 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:57.294 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:57.294 +++ basename /dev/fd/62 00:05:57.294 ++ mktemp /tmp/62.XXX 00:05:57.294 + tmp_file_1=/tmp/62.qH4 00:05:57.294 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.294 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:57.294 + tmp_file_2=/tmp/spdk_tgt_config.json.Y89 00:05:57.294 + ret=0 00:05:57.294 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:57.552 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:57.811 + diff -u /tmp/62.qH4 /tmp/spdk_tgt_config.json.Y89 00:05:57.811 + ret=1 00:05:57.811 + echo '=== Start of file: /tmp/62.qH4 ===' 00:05:57.811 + cat /tmp/62.qH4 00:05:57.811 + echo '=== End of file: /tmp/62.qH4 ===' 00:05:57.811 + echo '' 00:05:57.811 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Y89 ===' 00:05:57.811 + cat /tmp/spdk_tgt_config.json.Y89 00:05:57.811 + echo '=== End of file: /tmp/spdk_tgt_config.json.Y89 ===' 00:05:57.811 + echo '' 00:05:57.811 + rm /tmp/62.qH4 /tmp/spdk_tgt_config.json.Y89 00:05:57.811 + exit 1 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:57.811 INFO: configuration change detected. 
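The second diff is the negative case: deleting MallocBdevForConfigChangeCheck over RPC makes the live configuration diverge from spdk_tgt_config.json, so the sorted diff now returns 1 and the test reports the change as detected. A short sketch of the mutation step, reusing the RPC name from the log and the same $SPDK_DIR shorthand as in the previous sketch:
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # repeating the sorted save_config-vs-file diff is now expected to exit non-zero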
00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@317 -- # [[ -n 4102298 ]] 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.811 15:42:27 json_config -- json_config/json_config.sh@323 -- # killprocess 4102298 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@948 -- # '[' -z 4102298 ']' 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@952 -- # kill -0 4102298 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@953 -- # uname 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4102298 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4102298' 00:05:57.811 killing process with pid 4102298 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@967 -- # kill 4102298 00:05:57.811 15:42:27 json_config -- common/autotest_common.sh@972 -- # wait 4102298 00:05:59.714 15:42:28 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.714 15:42:28 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:59.714 15:42:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:59.714 15:42:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.714 15:42:28 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:59.714 15:42:28 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:59.714 INFO: Success 00:05:59.714 00:05:59.715 real 0m16.619s 
00:05:59.715 user 0m18.346s 00:05:59.715 sys 0m2.279s 00:05:59.715 15:42:28 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.715 15:42:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.715 ************************************ 00:05:59.715 END TEST json_config 00:05:59.715 ************************************ 00:05:59.715 15:42:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.715 15:42:28 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.715 15:42:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.715 15:42:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.715 15:42:28 -- common/autotest_common.sh@10 -- # set +x 00:05:59.715 ************************************ 00:05:59.715 START TEST json_config_extra_key 00:05:59.715 ************************************ 00:05:59.715 15:42:28 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.715 15:42:29 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.715 15:42:29 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.715 15:42:29 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.715 15:42:29 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.715 15:42:29 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.715 15:42:29 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.715 15:42:29 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:59.715 15:42:29 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.715 15:42:29 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:59.715 15:42:29 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:59.715 INFO: launching applications... 00:05:59.715 15:42:29 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4103228 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.715 Waiting for target to run... 00:05:59.715 15:42:29 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4103228 /var/tmp/spdk_tgt.sock 00:05:59.715 15:42:29 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 4103228 ']' 00:05:59.715 15:42:29 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.715 15:42:29 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.715 15:42:29 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.715 15:42:29 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.715 15:42:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:59.715 [2024-07-12 15:42:29.108145] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
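Unlike the earlier json_config run, which started the target with --wait-for-rpc and configured it over RPC, json_config_extra_key boots spdk_tgt directly from a static JSON file. A minimal sketch of that startup mode, with the arguments copied from the launch line above:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json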
00:05:59.715 [2024-07-12 15:42:29.108238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103228 ] 00:05:59.715 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.975 [2024-07-12 15:42:29.632220] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.233 [2024-07-12 15:42:29.727520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.491 15:42:30 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.491 15:42:30 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:00.491 00:06:00.491 15:42:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:00.491 INFO: shutting down applications... 00:06:00.491 15:42:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4103228 ]] 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4103228 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4103228 00:06:00.491 15:42:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.057 15:42:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.057 15:42:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.057 15:42:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4103228 00:06:01.057 15:42:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.057 15:42:30 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:01.057 15:42:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.057 15:42:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.057 SPDK target shutdown done 00:06:01.057 15:42:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:01.057 Success 00:06:01.057 00:06:01.057 real 0m1.583s 00:06:01.057 user 0m1.417s 00:06:01.057 sys 0m0.633s 00:06:01.057 15:42:30 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.057 15:42:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:01.057 ************************************ 00:06:01.057 END TEST json_config_extra_key 00:06:01.057 ************************************ 00:06:01.057 15:42:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:01.057 15:42:30 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.057 15:42:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.057 15:42:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.057 15:42:30 -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.057 ************************************ 00:06:01.057 START TEST alias_rpc 00:06:01.057 ************************************ 00:06:01.057 15:42:30 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:01.057 * Looking for test storage... 00:06:01.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:01.057 15:42:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:01.057 15:42:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4103530 00:06:01.057 15:42:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:01.057 15:42:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4103530 00:06:01.057 15:42:30 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 4103530 ']' 00:06:01.057 15:42:30 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.057 15:42:30 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.057 15:42:30 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.057 15:42:30 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.057 15:42:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.057 [2024-07-12 15:42:30.732576] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:01.057 [2024-07-12 15:42:30.732672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103530 ] 00:06:01.057 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.315 [2024-07-12 15:42:30.789518] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.315 [2024-07-12 15:42:30.894618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.574 15:42:31 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.574 15:42:31 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.574 15:42:31 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:01.831 15:42:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4103530 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 4103530 ']' 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 4103530 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103530 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103530' 00:06:01.831 killing process with pid 4103530 00:06:01.831 15:42:31 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 4103530 00:06:01.831 15:42:31 alias_rpc -- common/autotest_common.sh@972 -- # wait 4103530 00:06:02.397 00:06:02.397 real 0m1.234s 00:06:02.397 user 0m1.334s 00:06:02.397 sys 0m0.393s 00:06:02.397 15:42:31 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.397 15:42:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.397 ************************************ 00:06:02.397 END TEST alias_rpc 00:06:02.397 ************************************ 00:06:02.397 15:42:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.397 15:42:31 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:02.397 15:42:31 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.397 15:42:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.397 15:42:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.397 15:42:31 -- common/autotest_common.sh@10 -- # set +x 00:06:02.397 ************************************ 00:06:02.397 START TEST spdkcli_tcp 00:06:02.397 ************************************ 00:06:02.397 15:42:31 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:02.398 * Looking for test storage... 00:06:02.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4103718 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:02.398 15:42:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4103718 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 4103718 ']' 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.398 15:42:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.398 [2024-07-12 15:42:32.023946] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:02.398 [2024-07-12 15:42:32.024048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103718 ] 00:06:02.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.398 [2024-07-12 15:42:32.081934] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.656 [2024-07-12 15:42:32.194735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.656 [2024-07-12 15:42:32.194739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.914 15:42:32 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.914 15:42:32 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:02.914 15:42:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4103732 00:06:02.914 15:42:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:02.914 15:42:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:03.172 [ 00:06:03.172 "bdev_malloc_delete", 00:06:03.172 "bdev_malloc_create", 00:06:03.172 "bdev_null_resize", 00:06:03.172 "bdev_null_delete", 00:06:03.172 "bdev_null_create", 00:06:03.172 "bdev_nvme_cuse_unregister", 00:06:03.172 "bdev_nvme_cuse_register", 00:06:03.172 "bdev_opal_new_user", 00:06:03.172 "bdev_opal_set_lock_state", 00:06:03.172 "bdev_opal_delete", 00:06:03.172 "bdev_opal_get_info", 00:06:03.172 "bdev_opal_create", 00:06:03.172 "bdev_nvme_opal_revert", 00:06:03.172 "bdev_nvme_opal_init", 00:06:03.172 "bdev_nvme_send_cmd", 00:06:03.172 "bdev_nvme_get_path_iostat", 00:06:03.172 "bdev_nvme_get_mdns_discovery_info", 00:06:03.172 "bdev_nvme_stop_mdns_discovery", 00:06:03.172 "bdev_nvme_start_mdns_discovery", 00:06:03.172 "bdev_nvme_set_multipath_policy", 00:06:03.172 "bdev_nvme_set_preferred_path", 00:06:03.172 "bdev_nvme_get_io_paths", 00:06:03.172 "bdev_nvme_remove_error_injection", 00:06:03.172 "bdev_nvme_add_error_injection", 00:06:03.172 "bdev_nvme_get_discovery_info", 00:06:03.172 "bdev_nvme_stop_discovery", 00:06:03.172 "bdev_nvme_start_discovery", 00:06:03.172 "bdev_nvme_get_controller_health_info", 00:06:03.172 "bdev_nvme_disable_controller", 00:06:03.172 "bdev_nvme_enable_controller", 00:06:03.172 "bdev_nvme_reset_controller", 00:06:03.172 "bdev_nvme_get_transport_statistics", 00:06:03.172 "bdev_nvme_apply_firmware", 00:06:03.172 "bdev_nvme_detach_controller", 00:06:03.172 "bdev_nvme_get_controllers", 00:06:03.172 "bdev_nvme_attach_controller", 00:06:03.172 "bdev_nvme_set_hotplug", 00:06:03.172 "bdev_nvme_set_options", 00:06:03.172 "bdev_passthru_delete", 00:06:03.172 "bdev_passthru_create", 00:06:03.172 "bdev_lvol_set_parent_bdev", 00:06:03.172 "bdev_lvol_set_parent", 00:06:03.172 "bdev_lvol_check_shallow_copy", 00:06:03.172 "bdev_lvol_start_shallow_copy", 00:06:03.172 "bdev_lvol_grow_lvstore", 00:06:03.172 "bdev_lvol_get_lvols", 00:06:03.172 "bdev_lvol_get_lvstores", 00:06:03.172 "bdev_lvol_delete", 00:06:03.172 "bdev_lvol_set_read_only", 00:06:03.172 "bdev_lvol_resize", 00:06:03.172 "bdev_lvol_decouple_parent", 00:06:03.172 "bdev_lvol_inflate", 00:06:03.173 "bdev_lvol_rename", 00:06:03.173 "bdev_lvol_clone_bdev", 00:06:03.173 "bdev_lvol_clone", 00:06:03.173 "bdev_lvol_snapshot", 00:06:03.173 "bdev_lvol_create", 00:06:03.173 "bdev_lvol_delete_lvstore", 00:06:03.173 
"bdev_lvol_rename_lvstore", 00:06:03.173 "bdev_lvol_create_lvstore", 00:06:03.173 "bdev_raid_set_options", 00:06:03.173 "bdev_raid_remove_base_bdev", 00:06:03.173 "bdev_raid_add_base_bdev", 00:06:03.173 "bdev_raid_delete", 00:06:03.173 "bdev_raid_create", 00:06:03.173 "bdev_raid_get_bdevs", 00:06:03.173 "bdev_error_inject_error", 00:06:03.173 "bdev_error_delete", 00:06:03.173 "bdev_error_create", 00:06:03.173 "bdev_split_delete", 00:06:03.173 "bdev_split_create", 00:06:03.173 "bdev_delay_delete", 00:06:03.173 "bdev_delay_create", 00:06:03.173 "bdev_delay_update_latency", 00:06:03.173 "bdev_zone_block_delete", 00:06:03.173 "bdev_zone_block_create", 00:06:03.173 "blobfs_create", 00:06:03.173 "blobfs_detect", 00:06:03.173 "blobfs_set_cache_size", 00:06:03.173 "bdev_aio_delete", 00:06:03.173 "bdev_aio_rescan", 00:06:03.173 "bdev_aio_create", 00:06:03.173 "bdev_ftl_set_property", 00:06:03.173 "bdev_ftl_get_properties", 00:06:03.173 "bdev_ftl_get_stats", 00:06:03.173 "bdev_ftl_unmap", 00:06:03.173 "bdev_ftl_unload", 00:06:03.173 "bdev_ftl_delete", 00:06:03.173 "bdev_ftl_load", 00:06:03.173 "bdev_ftl_create", 00:06:03.173 "bdev_virtio_attach_controller", 00:06:03.173 "bdev_virtio_scsi_get_devices", 00:06:03.173 "bdev_virtio_detach_controller", 00:06:03.173 "bdev_virtio_blk_set_hotplug", 00:06:03.173 "bdev_iscsi_delete", 00:06:03.173 "bdev_iscsi_create", 00:06:03.173 "bdev_iscsi_set_options", 00:06:03.173 "accel_error_inject_error", 00:06:03.173 "ioat_scan_accel_module", 00:06:03.173 "dsa_scan_accel_module", 00:06:03.173 "iaa_scan_accel_module", 00:06:03.173 "vfu_virtio_create_scsi_endpoint", 00:06:03.173 "vfu_virtio_scsi_remove_target", 00:06:03.173 "vfu_virtio_scsi_add_target", 00:06:03.173 "vfu_virtio_create_blk_endpoint", 00:06:03.173 "vfu_virtio_delete_endpoint", 00:06:03.173 "keyring_file_remove_key", 00:06:03.173 "keyring_file_add_key", 00:06:03.173 "keyring_linux_set_options", 00:06:03.173 "iscsi_get_histogram", 00:06:03.173 "iscsi_enable_histogram", 00:06:03.173 "iscsi_set_options", 00:06:03.173 "iscsi_get_auth_groups", 00:06:03.173 "iscsi_auth_group_remove_secret", 00:06:03.173 "iscsi_auth_group_add_secret", 00:06:03.173 "iscsi_delete_auth_group", 00:06:03.173 "iscsi_create_auth_group", 00:06:03.173 "iscsi_set_discovery_auth", 00:06:03.173 "iscsi_get_options", 00:06:03.173 "iscsi_target_node_request_logout", 00:06:03.173 "iscsi_target_node_set_redirect", 00:06:03.173 "iscsi_target_node_set_auth", 00:06:03.173 "iscsi_target_node_add_lun", 00:06:03.173 "iscsi_get_stats", 00:06:03.173 "iscsi_get_connections", 00:06:03.173 "iscsi_portal_group_set_auth", 00:06:03.173 "iscsi_start_portal_group", 00:06:03.173 "iscsi_delete_portal_group", 00:06:03.173 "iscsi_create_portal_group", 00:06:03.173 "iscsi_get_portal_groups", 00:06:03.173 "iscsi_delete_target_node", 00:06:03.173 "iscsi_target_node_remove_pg_ig_maps", 00:06:03.173 "iscsi_target_node_add_pg_ig_maps", 00:06:03.173 "iscsi_create_target_node", 00:06:03.173 "iscsi_get_target_nodes", 00:06:03.173 "iscsi_delete_initiator_group", 00:06:03.173 "iscsi_initiator_group_remove_initiators", 00:06:03.173 "iscsi_initiator_group_add_initiators", 00:06:03.173 "iscsi_create_initiator_group", 00:06:03.173 "iscsi_get_initiator_groups", 00:06:03.173 "nvmf_set_crdt", 00:06:03.173 "nvmf_set_config", 00:06:03.173 "nvmf_set_max_subsystems", 00:06:03.173 "nvmf_stop_mdns_prr", 00:06:03.173 "nvmf_publish_mdns_prr", 00:06:03.173 "nvmf_subsystem_get_listeners", 00:06:03.173 "nvmf_subsystem_get_qpairs", 00:06:03.173 "nvmf_subsystem_get_controllers", 00:06:03.173 
"nvmf_get_stats", 00:06:03.173 "nvmf_get_transports", 00:06:03.173 "nvmf_create_transport", 00:06:03.173 "nvmf_get_targets", 00:06:03.173 "nvmf_delete_target", 00:06:03.173 "nvmf_create_target", 00:06:03.173 "nvmf_subsystem_allow_any_host", 00:06:03.173 "nvmf_subsystem_remove_host", 00:06:03.173 "nvmf_subsystem_add_host", 00:06:03.173 "nvmf_ns_remove_host", 00:06:03.173 "nvmf_ns_add_host", 00:06:03.173 "nvmf_subsystem_remove_ns", 00:06:03.173 "nvmf_subsystem_add_ns", 00:06:03.173 "nvmf_subsystem_listener_set_ana_state", 00:06:03.173 "nvmf_discovery_get_referrals", 00:06:03.173 "nvmf_discovery_remove_referral", 00:06:03.173 "nvmf_discovery_add_referral", 00:06:03.173 "nvmf_subsystem_remove_listener", 00:06:03.173 "nvmf_subsystem_add_listener", 00:06:03.173 "nvmf_delete_subsystem", 00:06:03.173 "nvmf_create_subsystem", 00:06:03.173 "nvmf_get_subsystems", 00:06:03.173 "env_dpdk_get_mem_stats", 00:06:03.173 "nbd_get_disks", 00:06:03.173 "nbd_stop_disk", 00:06:03.173 "nbd_start_disk", 00:06:03.173 "ublk_recover_disk", 00:06:03.173 "ublk_get_disks", 00:06:03.173 "ublk_stop_disk", 00:06:03.173 "ublk_start_disk", 00:06:03.173 "ublk_destroy_target", 00:06:03.173 "ublk_create_target", 00:06:03.173 "virtio_blk_create_transport", 00:06:03.173 "virtio_blk_get_transports", 00:06:03.173 "vhost_controller_set_coalescing", 00:06:03.173 "vhost_get_controllers", 00:06:03.173 "vhost_delete_controller", 00:06:03.173 "vhost_create_blk_controller", 00:06:03.173 "vhost_scsi_controller_remove_target", 00:06:03.173 "vhost_scsi_controller_add_target", 00:06:03.173 "vhost_start_scsi_controller", 00:06:03.173 "vhost_create_scsi_controller", 00:06:03.173 "thread_set_cpumask", 00:06:03.173 "framework_get_governor", 00:06:03.173 "framework_get_scheduler", 00:06:03.173 "framework_set_scheduler", 00:06:03.173 "framework_get_reactors", 00:06:03.173 "thread_get_io_channels", 00:06:03.173 "thread_get_pollers", 00:06:03.173 "thread_get_stats", 00:06:03.173 "framework_monitor_context_switch", 00:06:03.173 "spdk_kill_instance", 00:06:03.173 "log_enable_timestamps", 00:06:03.173 "log_get_flags", 00:06:03.173 "log_clear_flag", 00:06:03.173 "log_set_flag", 00:06:03.173 "log_get_level", 00:06:03.173 "log_set_level", 00:06:03.173 "log_get_print_level", 00:06:03.173 "log_set_print_level", 00:06:03.173 "framework_enable_cpumask_locks", 00:06:03.173 "framework_disable_cpumask_locks", 00:06:03.173 "framework_wait_init", 00:06:03.173 "framework_start_init", 00:06:03.173 "scsi_get_devices", 00:06:03.173 "bdev_get_histogram", 00:06:03.173 "bdev_enable_histogram", 00:06:03.173 "bdev_set_qos_limit", 00:06:03.173 "bdev_set_qd_sampling_period", 00:06:03.173 "bdev_get_bdevs", 00:06:03.173 "bdev_reset_iostat", 00:06:03.173 "bdev_get_iostat", 00:06:03.173 "bdev_examine", 00:06:03.173 "bdev_wait_for_examine", 00:06:03.173 "bdev_set_options", 00:06:03.173 "notify_get_notifications", 00:06:03.173 "notify_get_types", 00:06:03.173 "accel_get_stats", 00:06:03.173 "accel_set_options", 00:06:03.173 "accel_set_driver", 00:06:03.173 "accel_crypto_key_destroy", 00:06:03.173 "accel_crypto_keys_get", 00:06:03.173 "accel_crypto_key_create", 00:06:03.173 "accel_assign_opc", 00:06:03.173 "accel_get_module_info", 00:06:03.173 "accel_get_opc_assignments", 00:06:03.173 "vmd_rescan", 00:06:03.173 "vmd_remove_device", 00:06:03.173 "vmd_enable", 00:06:03.173 "sock_get_default_impl", 00:06:03.173 "sock_set_default_impl", 00:06:03.173 "sock_impl_set_options", 00:06:03.173 "sock_impl_get_options", 00:06:03.173 "iobuf_get_stats", 00:06:03.173 "iobuf_set_options", 
00:06:03.173 "keyring_get_keys", 00:06:03.173 "framework_get_pci_devices", 00:06:03.173 "framework_get_config", 00:06:03.173 "framework_get_subsystems", 00:06:03.173 "vfu_tgt_set_base_path", 00:06:03.173 "trace_get_info", 00:06:03.173 "trace_get_tpoint_group_mask", 00:06:03.173 "trace_disable_tpoint_group", 00:06:03.173 "trace_enable_tpoint_group", 00:06:03.173 "trace_clear_tpoint_mask", 00:06:03.173 "trace_set_tpoint_mask", 00:06:03.173 "spdk_get_version", 00:06:03.173 "rpc_get_methods" 00:06:03.173 ] 00:06:03.173 15:42:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.173 15:42:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:03.173 15:42:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4103718 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 4103718 ']' 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 4103718 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103718 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103718' 00:06:03.173 killing process with pid 4103718 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 4103718 00:06:03.173 15:42:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 4103718 00:06:03.739 00:06:03.739 real 0m1.299s 00:06:03.739 user 0m2.259s 00:06:03.739 sys 0m0.460s 00:06:03.739 15:42:33 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.739 15:42:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:03.739 ************************************ 00:06:03.739 END TEST spdkcli_tcp 00:06:03.739 ************************************ 00:06:03.739 15:42:33 -- common/autotest_common.sh@1142 -- # return 0 00:06:03.739 15:42:33 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.739 15:42:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.739 15:42:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.739 15:42:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.739 ************************************ 00:06:03.739 START TEST dpdk_mem_utility 00:06:03.739 ************************************ 00:06:03.739 15:42:33 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:03.739 * Looking for test storage... 
00:06:03.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:03.739 15:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:03.739 15:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4103927 00:06:03.739 15:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:03.739 15:42:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4103927 00:06:03.739 15:42:33 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 4103927 ']' 00:06:03.739 15:42:33 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.739 15:42:33 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.739 15:42:33 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.739 15:42:33 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.739 15:42:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.739 [2024-07-12 15:42:33.351753] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:03.739 [2024-07-12 15:42:33.351834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4103927 ] 00:06:03.739 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.739 [2024-07-12 15:42:33.408731] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.997 [2024-07-12 15:42:33.514087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.562 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.562 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:04.562 15:42:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:04.562 15:42:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:04.562 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.562 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:04.562 { 00:06:04.562 "filename": "/tmp/spdk_mem_dump.txt" 00:06:04.562 } 00:06:04.562 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.562 15:42:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:04.821 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:04.821 1 heaps totaling size 814.000000 MiB 00:06:04.821 size: 814.000000 MiB heap id: 0 00:06:04.821 end heaps---------- 00:06:04.821 8 mempools totaling size 598.116089 MiB 00:06:04.821 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:04.821 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:04.821 size: 84.521057 MiB name: bdev_io_4103927 00:06:04.821 size: 51.011292 MiB name: evtpool_4103927 00:06:04.821 
size: 50.003479 MiB name: msgpool_4103927 00:06:04.821 size: 21.763794 MiB name: PDU_Pool 00:06:04.821 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:04.821 size: 0.026123 MiB name: Session_Pool 00:06:04.821 end mempools------- 00:06:04.821 6 memzones totaling size 4.142822 MiB 00:06:04.821 size: 1.000366 MiB name: RG_ring_0_4103927 00:06:04.821 size: 1.000366 MiB name: RG_ring_1_4103927 00:06:04.821 size: 1.000366 MiB name: RG_ring_4_4103927 00:06:04.821 size: 1.000366 MiB name: RG_ring_5_4103927 00:06:04.821 size: 0.125366 MiB name: RG_ring_2_4103927 00:06:04.821 size: 0.015991 MiB name: RG_ring_3_4103927 00:06:04.821 end memzones------- 00:06:04.821 15:42:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:04.821 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:04.821 list of free elements. size: 12.519348 MiB 00:06:04.821 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:04.821 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:04.821 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:04.821 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:04.821 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:04.821 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:04.821 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:04.821 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:04.821 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:04.821 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:04.821 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:04.821 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:04.821 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:04.821 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:04.821 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:04.821 list of standard malloc elements. 
size: 199.218079 MiB 00:06:04.821 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:04.821 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:04.821 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:04.821 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:04.821 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:04.821 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:04.821 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:04.821 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:04.821 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:04.821 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:04.821 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:04.821 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:04.821 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:04.821 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:04.821 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:04.821 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:04.821 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:04.821 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:04.821 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:04.821 list of memzone associated elements. 
size: 602.262573 MiB 00:06:04.821 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:04.822 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:04.822 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:04.822 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:04.822 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:04.822 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4103927_0 00:06:04.822 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:04.822 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4103927_0 00:06:04.822 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:04.822 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4103927_0 00:06:04.822 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:04.822 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:04.822 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:04.822 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:04.822 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:04.822 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4103927 00:06:04.822 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:04.822 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4103927 00:06:04.822 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:04.822 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4103927 00:06:04.822 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:04.822 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:04.822 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:04.822 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:04.822 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:04.822 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:04.822 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:04.822 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:04.822 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:04.822 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4103927 00:06:04.822 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:04.822 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4103927 00:06:04.822 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:04.822 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4103927 00:06:04.822 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:04.822 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4103927 00:06:04.822 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:04.822 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4103927 00:06:04.822 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:04.822 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:04.822 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:04.822 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:04.822 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:04.822 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:04.822 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:04.822 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4103927 00:06:04.822 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:04.822 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:04.822 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:04.822 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:04.822 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:04.822 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4103927 00:06:04.822 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:04.822 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:04.822 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:04.822 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4103927 00:06:04.822 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:04.822 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4103927 00:06:04.822 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:04.822 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:04.822 15:42:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:04.822 15:42:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4103927 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 4103927 ']' 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 4103927 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4103927 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4103927' 00:06:04.822 killing process with pid 4103927 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 4103927 00:06:04.822 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 4103927 00:06:05.387 00:06:05.387 real 0m1.601s 00:06:05.387 user 0m1.738s 00:06:05.387 sys 0m0.437s 00:06:05.387 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.387 15:42:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.387 ************************************ 00:06:05.387 END TEST dpdk_mem_utility 00:06:05.387 ************************************ 00:06:05.387 15:42:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:05.387 15:42:34 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:05.387 15:42:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.387 15:42:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.387 15:42:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.387 ************************************ 00:06:05.387 START TEST event 00:06:05.387 ************************************ 00:06:05.387 15:42:34 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:05.387 * Looking for test storage... 
00:06:05.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:05.387 15:42:34 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:05.387 15:42:34 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:05.387 15:42:34 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.387 15:42:34 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:05.387 15:42:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.387 15:42:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.387 ************************************ 00:06:05.387 START TEST event_perf 00:06:05.387 ************************************ 00:06:05.387 15:42:34 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:05.387 Running I/O for 1 seconds...[2024-07-12 15:42:34.990500] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:05.387 [2024-07-12 15:42:34.990563] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104233 ] 00:06:05.387 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.387 [2024-07-12 15:42:35.050130] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.645 [2024-07-12 15:42:35.163747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.645 [2024-07-12 15:42:35.163803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.645 [2024-07-12 15:42:35.163871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.645 [2024-07-12 15:42:35.163873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.578 Running I/O for 1 seconds... 00:06:06.578 lcore 0: 232561 00:06:06.578 lcore 1: 232560 00:06:06.578 lcore 2: 232560 00:06:06.578 lcore 3: 232560 00:06:06.578 done. 00:06:06.578 00:06:06.578 real 0m1.299s 00:06:06.578 user 0m4.202s 00:06:06.578 sys 0m0.092s 00:06:06.578 15:42:36 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.578 15:42:36 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.578 ************************************ 00:06:06.578 END TEST event_perf 00:06:06.578 ************************************ 00:06:06.578 15:42:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:06.578 15:42:36 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:06.578 15:42:36 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:06.578 15:42:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.578 15:42:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.836 ************************************ 00:06:06.836 START TEST event_reactor 00:06:06.836 ************************************ 00:06:06.836 15:42:36 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:06.836 [2024-07-12 15:42:36.342158] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:06.836 [2024-07-12 15:42:36.342226] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104396 ] 00:06:06.836 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.836 [2024-07-12 15:42:36.398813] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.836 [2024-07-12 15:42:36.502376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.209 test_start 00:06:08.209 oneshot 00:06:08.209 tick 100 00:06:08.209 tick 100 00:06:08.209 tick 250 00:06:08.209 tick 100 00:06:08.209 tick 100 00:06:08.209 tick 100 00:06:08.209 tick 250 00:06:08.209 tick 500 00:06:08.209 tick 100 00:06:08.209 tick 100 00:06:08.209 tick 250 00:06:08.209 tick 100 00:06:08.209 tick 100 00:06:08.209 test_end 00:06:08.209 00:06:08.209 real 0m1.285s 00:06:08.209 user 0m1.210s 00:06:08.209 sys 0m0.071s 00:06:08.209 15:42:37 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.209 15:42:37 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:08.209 ************************************ 00:06:08.209 END TEST event_reactor 00:06:08.209 ************************************ 00:06:08.209 15:42:37 event -- common/autotest_common.sh@1142 -- # return 0 00:06:08.209 15:42:37 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.209 15:42:37 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:08.209 15:42:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.209 15:42:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.209 ************************************ 00:06:08.209 START TEST event_reactor_perf 00:06:08.209 ************************************ 00:06:08.209 15:42:37 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:08.209 [2024-07-12 15:42:37.675789] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:08.209 [2024-07-12 15:42:37.675859] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104558 ] 00:06:08.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.209 [2024-07-12 15:42:37.734089] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.209 [2024-07-12 15:42:37.838533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.586 test_start 00:06:09.586 test_end 00:06:09.586 Performance: 446197 events per second 00:06:09.586 00:06:09.586 real 0m1.284s 00:06:09.586 user 0m1.204s 00:06:09.586 sys 0m0.076s 00:06:09.586 15:42:38 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.586 15:42:38 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.586 ************************************ 00:06:09.586 END TEST event_reactor_perf 00:06:09.586 ************************************ 00:06:09.586 15:42:38 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.586 15:42:38 event -- event/event.sh@49 -- # uname -s 00:06:09.586 15:42:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:09.586 15:42:38 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:09.586 15:42:38 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.587 15:42:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.587 15:42:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.587 ************************************ 00:06:09.587 START TEST event_scheduler 00:06:09.587 ************************************ 00:06:09.587 15:42:38 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:09.587 * Looking for test storage... 00:06:09.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:09.587 15:42:39 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:09.587 15:42:39 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4104743 00:06:09.587 15:42:39 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.587 15:42:39 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:09.587 15:42:39 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4104743 00:06:09.587 15:42:39 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 4104743 ']' 00:06:09.587 15:42:39 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.587 15:42:39 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.587 15:42:39 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.587 15:42:39 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.587 15:42:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.587 [2024-07-12 15:42:39.092211] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:09.587 [2024-07-12 15:42:39.092287] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4104743 ] 00:06:09.587 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.587 [2024-07-12 15:42:39.153492] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.587 [2024-07-12 15:42:39.261287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.587 [2024-07-12 15:42:39.261342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.587 [2024-07-12 15:42:39.261411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.587 [2024-07-12 15:42:39.265346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:10.565 15:42:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.565 [2024-07-12 15:42:40.032108] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:10.565 [2024-07-12 15:42:40.032160] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:10.565 [2024-07-12 15:42:40.032177] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:10.565 [2024-07-12 15:42:40.032188] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:10.565 [2024-07-12 15:42:40.032198] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.565 15:42:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.565 [2024-07-12 15:42:40.130262] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.565 15:42:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.565 15:42:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.565 ************************************ 00:06:10.565 START TEST scheduler_create_thread 00:06:10.565 ************************************ 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.565 2 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.565 3 00:06:10.565 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 4 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 5 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 6 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 7 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 8 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 9 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 10 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.566 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.128 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.128 00:06:11.128 real 0m0.591s 00:06:11.128 user 0m0.009s 00:06:11.128 sys 0m0.004s 00:06:11.128 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.128 15:42:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.128 ************************************ 00:06:11.128 END TEST scheduler_create_thread 00:06:11.128 ************************************ 00:06:11.128 15:42:40 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:11.128 15:42:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:11.129 15:42:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4104743 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 4104743 ']' 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 4104743 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4104743 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4104743' 00:06:11.129 killing process with pid 4104743 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 4104743 00:06:11.129 15:42:40 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 4104743 00:06:11.692 [2024-07-12 15:42:41.226353] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:11.950 00:06:11.950 real 0m2.482s 00:06:11.950 user 0m5.289s 00:06:11.950 sys 0m0.345s 00:06:11.950 15:42:41 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.950 15:42:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.950 ************************************ 00:06:11.950 END TEST event_scheduler 00:06:11.950 ************************************ 00:06:11.950 15:42:41 event -- common/autotest_common.sh@1142 -- # return 0 00:06:11.950 15:42:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:11.950 15:42:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:11.950 15:42:41 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.950 15:42:41 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.950 15:42:41 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.950 ************************************ 00:06:11.950 START TEST app_repeat 00:06:11.950 ************************************ 00:06:11.950 15:42:41 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4105065 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4105065' 00:06:11.950 Process app_repeat pid: 4105065 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:11.950 spdk_app_start Round 0 00:06:11.950 15:42:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4105065 /var/tmp/spdk-nbd.sock 00:06:11.950 15:42:41 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4105065 ']' 00:06:11.950 15:42:41 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.950 15:42:41 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.951 15:42:41 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.951 15:42:41 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.951 15:42:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.951 [2024-07-12 15:42:41.561495] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:11.951 [2024-07-12 15:42:41.561559] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4105065 ] 00:06:11.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.951 [2024-07-12 15:42:41.618714] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.208 [2024-07-12 15:42:41.727067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.208 [2024-07-12 15:42:41.727071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.208 15:42:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.208 15:42:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.208 15:42:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.465 Malloc0 00:06:12.465 15:42:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.723 Malloc1 00:06:12.723 15:42:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.723 15:42:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.980 /dev/nbd0 00:06:12.980 15:42:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.980 15:42:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.980 15:42:42 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.980 1+0 records in 00:06:12.980 1+0 records out 00:06:12.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155273 s, 26.4 MB/s 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.980 15:42:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.980 15:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.980 15:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.980 15:42:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.238 /dev/nbd1 00:06:13.238 15:42:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.238 15:42:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.238 1+0 records in 00:06:13.238 1+0 records out 00:06:13.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191656 s, 21.4 MB/s 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.238 15:42:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.238 15:42:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.238 15:42:42 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.238 15:42:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.238 15:42:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.238 15:42:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.495 { 00:06:13.495 "nbd_device": "/dev/nbd0", 00:06:13.495 "bdev_name": "Malloc0" 00:06:13.495 }, 00:06:13.495 { 00:06:13.495 "nbd_device": "/dev/nbd1", 00:06:13.495 "bdev_name": "Malloc1" 00:06:13.495 } 00:06:13.495 ]' 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.495 { 00:06:13.495 "nbd_device": "/dev/nbd0", 00:06:13.495 "bdev_name": "Malloc0" 00:06:13.495 }, 00:06:13.495 { 00:06:13.495 "nbd_device": "/dev/nbd1", 00:06:13.495 "bdev_name": "Malloc1" 00:06:13.495 } 00:06:13.495 ]' 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.495 /dev/nbd1' 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.495 /dev/nbd1' 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.495 256+0 records in 00:06:13.495 256+0 records out 00:06:13.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504632 s, 208 MB/s 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.495 15:42:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.751 256+0 records in 00:06:13.751 256+0 records out 00:06:13.751 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212832 s, 49.3 MB/s 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.751 256+0 records in 00:06:13.751 256+0 records out 00:06:13.751 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0227108 s, 46.2 MB/s 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.751 15:42:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.009 15:42:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.266 15:42:43 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.266 15:42:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.522 15:42:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.522 15:42:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.779 15:42:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.035 [2024-07-12 15:42:44.629017] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.035 [2024-07-12 15:42:44.732502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.035 [2024-07-12 15:42:44.732502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.292 [2024-07-12 15:42:44.787669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.292 [2024-07-12 15:42:44.787732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.814 15:42:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.814 15:42:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:17.814 spdk_app_start Round 1 00:06:17.814 15:42:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4105065 /var/tmp/spdk-nbd.sock 00:06:17.814 15:42:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4105065 ']' 00:06:17.814 15:42:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.814 15:42:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.814 15:42:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
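Each round of app_repeat exercises the same nbd data path with plain shell tools; a condensed sketch of that flow, with the long workspace paths shortened (per the RPCs echoed in the trace, 64 and 4096 are the malloc bdev size in MB and its block size):

    sock=/var/tmp/spdk-nbd.sock
    # create two 64 MB malloc bdevs with 4096-byte blocks and export them over nbd
    scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096    # -> Malloc0
    scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096    # -> Malloc1
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
    # write 1 MiB of random data through each device, then read it back and compare
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest "$d"
    done

This mirrors the nbd_rpc_data_verify/nbd_dd_data_verify helpers from nbd_common.sh seen above; the real helpers additionally poll /proc/partitions until each nbd node appears before touching it.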
00:06:17.814 15:42:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.814 15:42:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.071 15:42:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.071 15:42:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:18.071 15:42:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.328 Malloc0 00:06:18.328 15:42:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.585 Malloc1 00:06:18.585 15:42:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.585 15:42:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.843 /dev/nbd0 00:06:18.843 15:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.843 15:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:18.843 1+0 records in 00:06:18.843 1+0 records out 00:06:18.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180733 s, 22.7 MB/s 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:18.843 15:42:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:18.843 15:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.843 15:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.843 15:42:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.100 /dev/nbd1 00:06:19.100 15:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.100 15:42:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.100 1+0 records in 00:06:19.100 1+0 records out 00:06:19.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020954 s, 19.5 MB/s 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.100 15:42:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.100 15:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.100 15:42:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.100 15:42:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.100 15:42:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.100 15:42:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:19.358 { 00:06:19.358 "nbd_device": "/dev/nbd0", 00:06:19.358 "bdev_name": "Malloc0" 00:06:19.358 }, 00:06:19.358 { 00:06:19.358 "nbd_device": "/dev/nbd1", 00:06:19.358 "bdev_name": "Malloc1" 00:06:19.358 } 00:06:19.358 ]' 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.358 { 00:06:19.358 "nbd_device": "/dev/nbd0", 00:06:19.358 "bdev_name": "Malloc0" 00:06:19.358 }, 00:06:19.358 { 00:06:19.358 "nbd_device": "/dev/nbd1", 00:06:19.358 "bdev_name": "Malloc1" 00:06:19.358 } 00:06:19.358 ]' 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.358 /dev/nbd1' 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.358 /dev/nbd1' 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.358 15:42:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.358 256+0 records in 00:06:19.358 256+0 records out 00:06:19.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505465 s, 207 MB/s 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.358 256+0 records in 00:06:19.358 256+0 records out 00:06:19.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205388 s, 51.1 MB/s 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.358 256+0 records in 00:06:19.358 256+0 records out 00:06:19.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226875 s, 46.2 MB/s 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.358 15:42:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.616 15:42:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.873 15:42:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.131 15:42:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.131 15:42:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.131 15:42:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.389 15:42:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.389 15:42:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.646 15:42:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.903 [2024-07-12 15:42:50.408653] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.903 [2024-07-12 15:42:50.511481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.903 [2024-07-12 15:42:50.511485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.903 [2024-07-12 15:42:50.570341] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.903 [2024-07-12 15:42:50.570415] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.428 15:42:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.428 15:42:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:23.428 spdk_app_start Round 2 00:06:23.428 15:42:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4105065 /var/tmp/spdk-nbd.sock 00:06:23.428 15:42:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4105065 ']' 00:06:23.428 15:42:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.428 15:42:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.428 15:42:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
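After the disks are exported (and again after they are stopped) the harness counts the devices the target reports; that is the nbd_get_disks / jq / grep pipeline visible in the trace. A self-contained sketch of the same check, expecting 2 while the disks are up:

    sock=/var/tmp/spdk-nbd.sock
    disks_json=$(scripts/rpc.py -s "$sock" nbd_get_disks)
    # pull out the device nodes and count how many look like /dev/nbdX
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [ "$count" -eq 2 ] || { echo "expected 2 nbd devices, got $count"; exit 1; }

After nbd_stop_disk the same pipeline yields an empty list, which is why the later instances of the check in the log compare against 0 instead.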
00:06:23.428 15:42:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.428 15:42:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.685 15:42:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:23.685 15:42:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:23.685 15:42:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.942 Malloc0 00:06:23.942 15:42:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.201 Malloc1 00:06:24.201 15:42:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.201 15:42:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.201 15:42:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.201 15:42:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.201 15:42:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.201 15:42:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.202 15:42:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.496 /dev/nbd0 00:06:24.496 15:42:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.496 15:42:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:24.496 1+0 records in 00:06:24.496 1+0 records out 00:06:24.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156886 s, 26.1 MB/s 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.496 15:42:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.496 15:42:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.496 15:42:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.496 15:42:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.754 /dev/nbd1 00:06:24.754 15:42:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.754 15:42:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.754 1+0 records in 00:06:24.754 1+0 records out 00:06:24.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225399 s, 18.2 MB/s 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:24.754 15:42:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:24.754 15:42:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.754 15:42:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.754 15:42:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.754 15:42:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.754 15:42:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:25.012 { 00:06:25.012 "nbd_device": "/dev/nbd0", 00:06:25.012 "bdev_name": "Malloc0" 00:06:25.012 }, 00:06:25.012 { 00:06:25.012 "nbd_device": "/dev/nbd1", 00:06:25.012 "bdev_name": "Malloc1" 00:06:25.012 } 00:06:25.012 ]' 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.012 { 00:06:25.012 "nbd_device": "/dev/nbd0", 00:06:25.012 "bdev_name": "Malloc0" 00:06:25.012 }, 00:06:25.012 { 00:06:25.012 "nbd_device": "/dev/nbd1", 00:06:25.012 "bdev_name": "Malloc1" 00:06:25.012 } 00:06:25.012 ]' 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.012 /dev/nbd1' 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.012 /dev/nbd1' 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.012 15:42:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.269 256+0 records in 00:06:25.269 256+0 records out 00:06:25.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508906 s, 206 MB/s 00:06:25.269 15:42:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.269 15:42:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.269 256+0 records in 00:06:25.269 256+0 records out 00:06:25.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205628 s, 51.0 MB/s 00:06:25.269 15:42:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.269 15:42:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.269 256+0 records in 00:06:25.269 256+0 records out 00:06:25.269 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225873 s, 46.4 MB/s 00:06:25.269 15:42:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.270 15:42:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.527 15:42:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.785 15:42:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.043 15:42:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.043 15:42:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.301 15:42:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.558 [2024-07-12 15:42:56.176114] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.558 [2024-07-12 15:42:56.276995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.558 [2024-07-12 15:42:56.276999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.816 [2024-07-12 15:42:56.332705] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:26.816 [2024-07-12 15:42:56.332767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.343 15:42:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4105065 /var/tmp/spdk-nbd.sock 00:06:29.343 15:42:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4105065 ']' 00:06:29.343 15:42:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.343 15:42:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.343 15:42:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
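The teardown at the end of every round, repeated just above, detaches both nbd devices, waits for the kernel to drop them, and then asks the app to exit before the next round starts. A rough sketch (the 20-iteration cap comes from the waitfornbd_exit helper in the trace; the retry cadence is assumed):

    sock=/var/tmp/spdk-nbd.sock
    for name in nbd0 nbd1; do
        scripts/rpc.py -s "$sock" nbd_stop_disk "/dev/$name"
        # poll /proc/partitions until the node disappears, up to 20 tries
        for i in $(seq 1 20); do
            grep -q -w "$name" /proc/partitions || break
            sleep 0.1
        done
    done
    scripts/rpc.py -s "$sock" spdk_kill_instance SIGTERM
    sleep 3   # matches the event.sh@35 pause between rounds in the log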
00:06:29.343 15:42:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.343 15:42:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:29.606 15:42:59 event.app_repeat -- event/event.sh@39 -- # killprocess 4105065 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 4105065 ']' 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 4105065 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4105065 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4105065' 00:06:29.606 killing process with pid 4105065 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@967 -- # kill 4105065 00:06:29.606 15:42:59 event.app_repeat -- common/autotest_common.sh@972 -- # wait 4105065 00:06:29.866 spdk_app_start is called in Round 0. 00:06:29.866 Shutdown signal received, stop current app iteration 00:06:29.866 Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 reinitialization... 00:06:29.866 spdk_app_start is called in Round 1. 00:06:29.866 Shutdown signal received, stop current app iteration 00:06:29.866 Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 reinitialization... 00:06:29.866 spdk_app_start is called in Round 2. 00:06:29.866 Shutdown signal received, stop current app iteration 00:06:29.866 Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 reinitialization... 00:06:29.866 spdk_app_start is called in Round 3. 
00:06:29.866 Shutdown signal received, stop current app iteration 00:06:29.866 15:42:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:29.866 15:42:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:29.866 00:06:29.866 real 0m17.892s 00:06:29.866 user 0m38.797s 00:06:29.866 sys 0m3.225s 00:06:29.866 15:42:59 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.866 15:42:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:29.866 ************************************ 00:06:29.866 END TEST app_repeat 00:06:29.866 ************************************ 00:06:29.866 15:42:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:29.866 15:42:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:29.866 15:42:59 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.866 15:42:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.866 15:42:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.866 15:42:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.866 ************************************ 00:06:29.866 START TEST cpu_locks 00:06:29.866 ************************************ 00:06:29.866 15:42:59 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:29.866 * Looking for test storage... 00:06:29.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:29.866 15:42:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:29.866 15:42:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:29.866 15:42:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:29.866 15:42:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:29.866 15:42:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.866 15:42:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.866 15:42:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.866 ************************************ 00:06:29.866 START TEST default_locks 00:06:29.866 ************************************ 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4107414 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4107414 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4107414 ']' 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
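The default_locks case that has just started verifies that a running spdk_tgt holds its CPU core lock and that nothing is left once the process is gone. A sketch of the core of that check, using the same lslocks/grep pair echoed in the trace (waitforlisten is the test helper that blocks until the RPC socket is up):

    build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    # waitforlisten "$tgt_pid" would go here in the real test
    # the target must hold a lock whose path contains "spdk_cpu_lock"
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock
    kill "$tgt_pid"
    # once the process has exited, waiting on it again is expected to fail,
    # which is exactly what the NOT waitforlisten step below exercises

The "lslocks: write error" line above is harmless: grep -q exits as soon as it sees a match and closes the pipe on lslocks.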
00:06:29.866 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.866 15:42:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.124 [2024-07-12 15:42:59.613303] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:30.124 [2024-07-12 15:42:59.613399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107414 ] 00:06:30.124 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.124 [2024-07-12 15:42:59.671066] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.124 [2024-07-12 15:42:59.781343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.382 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.382 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:30.382 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4107414 00:06:30.382 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4107414 00:06:30.382 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.944 lslocks: write error 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4107414 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 4107414 ']' 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 4107414 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4107414 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4107414' 00:06:30.944 killing process with pid 4107414 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 4107414 00:06:30.944 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 4107414 00:06:31.202 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4107414 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4107414 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 4107414 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4107414 ']' 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4107414) - No such process 00:06:31.203 ERROR: process (pid: 4107414) is no longer running 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.203 00:06:31.203 real 0m1.288s 00:06:31.203 user 0m1.238s 00:06:31.203 sys 0m0.525s 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.203 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.203 ************************************ 00:06:31.203 END TEST default_locks 00:06:31.203 ************************************ 00:06:31.203 15:43:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.203 15:43:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:31.203 15:43:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.203 15:43:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.203 15:43:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.203 ************************************ 00:06:31.203 START TEST default_locks_via_rpc 00:06:31.203 ************************************ 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4107702 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:31.203 15:43:00 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4107702 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4107702 ']' 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.203 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.461 [2024-07-12 15:43:00.947072] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:31.461 [2024-07-12 15:43:00.947149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107702 ] 00:06:31.461 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.461 [2024-07-12 15:43:01.003934] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.461 [2024-07-12 15:43:01.113285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4107702 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4107702 00:06:31.720 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.978 
15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4107702 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 4107702 ']' 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 4107702 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4107702 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4107702' 00:06:31.978 killing process with pid 4107702 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 4107702 00:06:31.978 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 4107702 00:06:32.544 00:06:32.544 real 0m1.230s 00:06:32.544 user 0m1.193s 00:06:32.544 sys 0m0.474s 00:06:32.544 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.544 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.544 ************************************ 00:06:32.544 END TEST default_locks_via_rpc 00:06:32.544 ************************************ 00:06:32.544 15:43:02 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:32.544 15:43:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:32.544 15:43:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.544 15:43:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.544 15:43:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.544 ************************************ 00:06:32.544 START TEST non_locking_app_on_locked_coremask 00:06:32.544 ************************************ 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4107864 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4107864 /var/tmp/spdk.sock 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4107864 ']' 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.544 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.544 [2024-07-12 15:43:02.230205] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:32.544 [2024-07-12 15:43:02.230298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107864 ] 00:06:32.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.801 [2024-07-12 15:43:02.286768] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.801 [2024-07-12 15:43:02.390962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4107872 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4107872 /var/tmp/spdk2.sock 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4107872 ']' 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.060 15:43:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.060 [2024-07-12 15:43:02.690875] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:33.060 [2024-07-12 15:43:02.690971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4107872 ] 00:06:33.060 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.060 [2024-07-12 15:43:02.772829] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
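The entries above show the point of non_locking_app_on_locked_coremask: a second spdk_tgt is started on the same core mask but with --disable-cpumask-locks and its own RPC socket, so it prints "CPU core locks deactivated." and comes up without disturbing the lock held by the first instance. A rough sketch of the two launches with the same flags as above (binary path as used in this run; adjust to your checkout):

    ./build/bin/spdk_tgt -m 0x1 &                                                  # claims /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken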
00:06:33.060 [2024-07-12 15:43:02.772871] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.318 [2024-07-12 15:43:02.986787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.884 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.884 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:33.884 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4107864 00:06:33.884 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4107864 00:06:33.884 15:43:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.449 lslocks: write error 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4107864 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4107864 ']' 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4107864 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4107864 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4107864' 00:06:34.449 killing process with pid 4107864 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4107864 00:06:34.449 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4107864 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4107872 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4107872 ']' 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4107872 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4107872 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4107872' 00:06:35.382 
killing process with pid 4107872 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4107872 00:06:35.382 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4107872 00:06:35.947 00:06:35.947 real 0m3.222s 00:06:35.947 user 0m3.369s 00:06:35.947 sys 0m1.020s 00:06:35.947 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.947 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.947 ************************************ 00:06:35.947 END TEST non_locking_app_on_locked_coremask 00:06:35.947 ************************************ 00:06:35.947 15:43:05 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:35.947 15:43:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:35.947 15:43:05 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.947 15:43:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.947 15:43:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.947 ************************************ 00:06:35.947 START TEST locking_app_on_unlocked_coremask 00:06:35.947 ************************************ 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4108298 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4108298 /var/tmp/spdk.sock 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4108298 ']' 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.947 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:35.947 [2024-07-12 15:43:05.497295] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:35.947 [2024-07-12 15:43:05.497406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108298 ] 00:06:35.947 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.947 [2024-07-12 15:43:05.552500] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:35.947 [2024-07-12 15:43:05.552531] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.947 [2024-07-12 15:43:05.654356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4108306 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4108306 /var/tmp/spdk2.sock 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4108306 ']' 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.206 15:43:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.464 [2024-07-12 15:43:05.951556] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
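locking_app_on_unlocked_coremask, running above, inverts the previous case: the first target is the one started with --disable-cpumask-locks, so the second target, started with locking enabled on its own socket, can still claim core 0. A sketch of that ordering under the same assumptions as the previous snippet:

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &          # runs without taking the core lock
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # locking enabled; the claim on core 0 succeeds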
00:06:36.464 [2024-07-12 15:43:05.951655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108306 ] 00:06:36.464 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.464 [2024-07-12 15:43:06.034998] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.723 [2024-07-12 15:43:06.248515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.288 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.288 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:37.288 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4108306 00:06:37.288 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4108306 00:06:37.288 15:43:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.853 lslocks: write error 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4108298 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4108298 ']' 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4108298 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108298 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108298' 00:06:37.853 killing process with pid 4108298 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4108298 00:06:37.853 15:43:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4108298 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4108306 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4108306 ']' 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4108306 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108306 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108306' 00:06:38.826 killing process with pid 4108306 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4108306 00:06:38.826 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4108306 00:06:39.084 00:06:39.084 real 0m3.207s 00:06:39.084 user 0m3.366s 00:06:39.084 sys 0m1.023s 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.084 ************************************ 00:06:39.084 END TEST locking_app_on_unlocked_coremask 00:06:39.084 ************************************ 00:06:39.084 15:43:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:39.084 15:43:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.084 15:43:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.084 15:43:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.084 15:43:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.084 ************************************ 00:06:39.084 START TEST locking_app_on_locked_coremask 00:06:39.084 ************************************ 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4108659 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4108659 /var/tmp/spdk.sock 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4108659 ']' 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.084 15:43:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.084 [2024-07-12 15:43:08.754431] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:39.084 [2024-07-12 15:43:08.754509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108659 ] 00:06:39.084 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.342 [2024-07-12 15:43:08.813266] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.342 [2024-07-12 15:43:08.921700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4108749 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4108749 /var/tmp/spdk2.sock 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4108749 /var/tmp/spdk2.sock 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4108749 /var/tmp/spdk2.sock 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4108749 ']' 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.600 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.600 [2024-07-12 15:43:09.209674] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:39.600 [2024-07-12 15:43:09.209745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108749 ] 00:06:39.600 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.600 [2024-07-12 15:43:09.291545] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4108659 has claimed it. 00:06:39.600 [2024-07-12 15:43:09.291597] app.c: 906:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4108749) - No such process 00:06:40.533 ERROR: process (pid: 4108749) is no longer running 00:06:40.533 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.533 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:40.533 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:40.534 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.534 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.534 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.534 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4108659 00:06:40.534 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4108659 00:06:40.534 15:43:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.534 lslocks: write error 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4108659 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4108659 ']' 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4108659 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108659 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108659' 00:06:40.534 killing process with pid 4108659 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4108659 00:06:40.534 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4108659 00:06:41.098 00:06:41.098 real 0m1.995s 00:06:41.098 user 0m2.166s 00:06:41.098 sys 0m0.607s 00:06:41.098 15:43:10 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.098 15:43:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.098 ************************************ 00:06:41.098 END TEST locking_app_on_locked_coremask 00:06:41.098 ************************************ 00:06:41.098 15:43:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.098 15:43:10 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:41.098 15:43:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.098 15:43:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.098 15:43:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.098 ************************************ 00:06:41.098 START TEST locking_overlapped_coremask 00:06:41.098 ************************************ 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4108911 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4108911 /var/tmp/spdk.sock 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4108911 ']' 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.098 15:43:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.098 [2024-07-12 15:43:10.796145] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
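The locking_app_on_locked_coremask entries above end the way the test expects: with locking enabled on both sides, the second spdk_tgt aborts with "Cannot create lock on core 0, probably process 4108659 has claimed it." and the NOT wrapper around waitforlisten turns that non-zero exit into a pass. Sketched with the same flags (the second command is expected to fail):

    ./build/bin/spdk_tgt -m 0x1 &                                  # claims core 0
    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock || echo "second instance refused the core, as intended"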
00:06:41.098 [2024-07-12 15:43:10.796224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4108911 ] 00:06:41.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.355 [2024-07-12 15:43:10.855306] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.355 [2024-07-12 15:43:10.968364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.355 [2024-07-12 15:43:10.968388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.355 [2024-07-12 15:43:10.968391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4109044 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4109044 /var/tmp/spdk2.sock 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4109044 /var/tmp/spdk2.sock 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4109044 /var/tmp/spdk2.sock 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4109044 ']' 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.613 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.613 [2024-07-12 15:43:11.278992] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
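locking_overlapped_coremask, starting above, uses two partially overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the only contested core is core 2, which is exactly where the entries that follow show the second target failing. The overlap can be checked directly:

    printf 'contested cores mask: 0x%x\n' $(( 0x7 & 0x1c ))        # prints 0x4, i.e. core 2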
00:06:41.613 [2024-07-12 15:43:11.279083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109044 ] 00:06:41.613 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.871 [2024-07-12 15:43:11.368360] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4108911 has claimed it. 00:06:41.871 [2024-07-12 15:43:11.368422] app.c: 906:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4109044) - No such process 00:06:42.436 ERROR: process (pid: 4109044) is no longer running 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4108911 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 4108911 ']' 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 4108911 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4108911 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4108911' 00:06:42.436 killing process with pid 4108911 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 4108911 00:06:42.436 15:43:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 4108911 00:06:42.999 00:06:42.999 real 0m1.684s 00:06:42.999 user 0m4.475s 00:06:42.999 sys 0m0.448s 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.999 ************************************ 00:06:42.999 END TEST locking_overlapped_coremask 00:06:42.999 ************************************ 00:06:42.999 15:43:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:42.999 15:43:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.999 15:43:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.999 15:43:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.999 15:43:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.999 ************************************ 00:06:42.999 START TEST locking_overlapped_coremask_via_rpc 00:06:42.999 ************************************ 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4109210 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4109210 /var/tmp/spdk.sock 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4109210 ']' 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.999 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.999 [2024-07-12 15:43:12.532628] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:42.999 [2024-07-12 15:43:12.532703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109210 ] 00:06:42.999 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.999 [2024-07-12 15:43:12.589463] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
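Before tearing down, the check_remaining_locks helper whose expansion is visible above compares the lock files actually present under /var/tmp with the set expected for mask 0x7 (cores 0, 1 and 2). It amounts to roughly the following comparison, reconstructed from that expansion:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "only the expected core locks remain"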
00:06:42.999 [2024-07-12 15:43:12.589499] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.999 [2024-07-12 15:43:12.700748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.999 [2024-07-12 15:43:12.700813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.999 [2024-07-12 15:43:12.700817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4109216 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4109216 /var/tmp/spdk2.sock 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4109216 ']' 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.256 15:43:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.515 [2024-07-12 15:43:13.015754] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:43.515 [2024-07-12 15:43:13.015848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109216 ] 00:06:43.515 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.515 [2024-07-12 15:43:13.102585] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:43.515 [2024-07-12 15:43:13.102619] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.774 [2024-07-12 15:43:13.327358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.774 [2024-07-12 15:43:13.327423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.774 [2024-07-12 15:43:13.327426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.337 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.337 [2024-07-12 15:43:13.957423] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4109210 has claimed it. 
00:06:44.337 request: 00:06:44.338 { 00:06:44.338 "method": "framework_enable_cpumask_locks", 00:06:44.338 "req_id": 1 00:06:44.338 } 00:06:44.338 Got JSON-RPC error response 00:06:44.338 response: 00:06:44.338 { 00:06:44.338 "code": -32603, 00:06:44.338 "message": "Failed to claim CPU core: 2" 00:06:44.338 } 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4109210 /var/tmp/spdk.sock 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4109210 ']' 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.338 15:43:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4109216 /var/tmp/spdk2.sock 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4109216 ']' 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.595 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.853 00:06:44.853 real 0m1.994s 00:06:44.853 user 0m1.028s 00:06:44.853 sys 0m0.171s 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.853 15:43:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.853 ************************************ 00:06:44.853 END TEST locking_overlapped_coremask_via_rpc 00:06:44.853 ************************************ 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.853 15:43:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:44.853 15:43:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4109210 ]] 00:06:44.853 15:43:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4109210 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4109210 ']' 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4109210 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4109210 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4109210' 00:06:44.853 killing process with pid 4109210 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4109210 00:06:44.853 15:43:14 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4109210 00:06:45.418 15:43:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4109216 ]] 00:06:45.418 15:43:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4109216 00:06:45.418 15:43:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4109216 ']' 00:06:45.418 15:43:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4109216 00:06:45.418 15:43:14 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:45.418 15:43:14 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.418 15:43:14 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4109216 00:06:45.418 15:43:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:45.418 15:43:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:45.418 15:43:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4109216' 00:06:45.418 killing process with pid 4109216 00:06:45.418 15:43:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4109216 00:06:45.418 15:43:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4109216 00:06:45.983 15:43:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.983 15:43:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.983 15:43:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4109210 ]] 00:06:45.983 15:43:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4109210 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4109210 ']' 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4109210 00:06:45.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4109210) - No such process 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4109210 is not found' 00:06:45.983 Process with pid 4109210 is not found 00:06:45.983 15:43:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4109216 ]] 00:06:45.983 15:43:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4109216 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4109216 ']' 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4109216 00:06:45.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4109216) - No such process 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4109216 is not found' 00:06:45.983 Process with pid 4109216 is not found 00:06:45.983 15:43:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.983 00:06:45.983 real 0m15.990s 00:06:45.983 user 0m27.844s 00:06:45.983 sys 0m5.169s 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.983 15:43:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.983 ************************************ 00:06:45.983 END TEST cpu_locks 00:06:45.984 ************************************ 00:06:45.984 15:43:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:45.984 00:06:45.984 real 0m40.590s 00:06:45.984 user 1m18.695s 00:06:45.984 sys 0m9.211s 00:06:45.984 15:43:15 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.984 15:43:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.984 ************************************ 00:06:45.984 END TEST event 00:06:45.984 ************************************ 00:06:45.984 15:43:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:45.984 15:43:15 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:45.984 15:43:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.984 15:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.984 
15:43:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.984 ************************************ 00:06:45.984 START TEST thread 00:06:45.984 ************************************ 00:06:45.984 15:43:15 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:45.984 * Looking for test storage... 00:06:45.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:45.984 15:43:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.984 15:43:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:45.984 15:43:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.984 15:43:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.984 ************************************ 00:06:45.984 START TEST thread_poller_perf 00:06:45.984 ************************************ 00:06:45.984 15:43:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:45.984 [2024-07-12 15:43:15.633856] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:45.984 [2024-07-12 15:43:15.633928] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109702 ] 00:06:45.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.984 [2024-07-12 15:43:15.690837] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.242 [2024-07-12 15:43:15.794463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.242 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:47.612 ====================================== 00:06:47.612 busy:2710903773 (cyc) 00:06:47.612 total_run_count: 361000 00:06:47.612 tsc_hz: 2700000000 (cyc) 00:06:47.612 ====================================== 00:06:47.612 poller_cost: 7509 (cyc), 2781 (nsec) 00:06:47.612 00:06:47.612 real 0m1.294s 00:06:47.612 user 0m1.213s 00:06:47.612 sys 0m0.075s 00:06:47.612 15:43:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.612 15:43:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.612 ************************************ 00:06:47.612 END TEST thread_poller_perf 00:06:47.612 ************************************ 00:06:47.612 15:43:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:47.612 15:43:16 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.612 15:43:16 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:47.612 15:43:16 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.612 15:43:16 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.612 ************************************ 00:06:47.612 START TEST thread_poller_perf 00:06:47.612 ************************************ 00:06:47.612 15:43:16 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.612 [2024-07-12 15:43:16.975407] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:47.612 [2024-07-12 15:43:16.975474] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4109863 ] 00:06:47.612 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.612 [2024-07-12 15:43:17.032911] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.612 [2024-07-12 15:43:17.138083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.612 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:48.544 ====================================== 00:06:48.544 busy:2702420607 (cyc) 00:06:48.544 total_run_count: 4681000 00:06:48.544 tsc_hz: 2700000000 (cyc) 00:06:48.544 ====================================== 00:06:48.544 poller_cost: 577 (cyc), 213 (nsec) 00:06:48.544 00:06:48.544 real 0m1.288s 00:06:48.544 user 0m1.203s 00:06:48.544 sys 0m0.079s 00:06:48.544 15:43:18 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.544 15:43:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.544 ************************************ 00:06:48.544 END TEST thread_poller_perf 00:06:48.544 ************************************ 00:06:48.544 15:43:18 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:48.802 15:43:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:48.802 00:06:48.802 real 0m2.735s 00:06:48.802 user 0m2.481s 00:06:48.802 sys 0m0.255s 00:06:48.802 15:43:18 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.802 15:43:18 thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.802 ************************************ 00:06:48.802 END TEST thread 00:06:48.802 ************************************ 00:06:48.802 15:43:18 -- common/autotest_common.sh@1142 -- # return 0 00:06:48.802 15:43:18 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:48.802 15:43:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.802 15:43:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.802 15:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.802 ************************************ 00:06:48.802 START TEST accel 00:06:48.802 ************************************ 00:06:48.802 15:43:18 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:48.802 * Looking for test storage... 00:06:48.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:48.802 15:43:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:48.802 15:43:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:48.802 15:43:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.802 15:43:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4110061 00:06:48.802 15:43:18 accel -- accel/accel.sh@63 -- # waitforlisten 4110061 00:06:48.802 15:43:18 accel -- common/autotest_common.sh@829 -- # '[' -z 4110061 ']' 00:06:48.802 15:43:18 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:48.802 15:43:18 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.802 15:43:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:48.802 15:43:18 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.802 15:43:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.802 15:43:18 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.802 15:43:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.802 15:43:18 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.802 15:43:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.802 15:43:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.802 15:43:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.802 15:43:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.802 15:43:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:48.802 15:43:18 accel -- accel/accel.sh@41 -- # jq -r . 00:06:48.802 [2024-07-12 15:43:18.428881] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:48.802 [2024-07-12 15:43:18.428972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110061 ] 00:06:48.802 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.802 [2024-07-12 15:43:18.486401] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.061 [2024-07-12 15:43:18.600246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@862 -- # return 0 00:06:49.319 15:43:18 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:49.319 15:43:18 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:49.319 15:43:18 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:49.319 15:43:18 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:49.319 15:43:18 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:49.319 15:43:18 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:49.319 15:43:18 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 
15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # IFS== 00:06:49.319 15:43:18 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:49.319 15:43:18 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:49.319 15:43:18 accel -- accel/accel.sh@75 -- # killprocess 4110061 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@948 -- # '[' -z 4110061 ']' 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@952 -- # kill -0 4110061 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@953 -- # uname 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4110061 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4110061' 00:06:49.319 killing process with pid 4110061 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@967 -- # kill 4110061 00:06:49.319 15:43:18 accel -- common/autotest_common.sh@972 -- # wait 4110061 00:06:49.883 15:43:19 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:49.883 15:43:19 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:49.883 15:43:19 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:49.883 15:43:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.883 15:43:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.883 15:43:19 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:49.883 15:43:19 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:49.883 15:43:19 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.883 15:43:19 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:49.883 15:43:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.883 15:43:19 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:49.883 15:43:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.883 15:43:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.883 15:43:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.883 ************************************ 00:06:49.883 START TEST accel_missing_filename 00:06:49.883 ************************************ 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.883 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:49.883 15:43:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:49.883 15:43:19 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:49.883 15:43:19 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.883 15:43:19 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.883 15:43:19 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.884 15:43:19 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.884 15:43:19 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.884 15:43:19 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:49.884 15:43:19 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:49.884 [2024-07-12 15:43:19.457732] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:49.884 [2024-07-12 15:43:19.457797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110229 ] 00:06:49.884 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.884 [2024-07-12 15:43:19.516084] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.141 [2024-07-12 15:43:19.622630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.141 [2024-07-12 15:43:19.680140] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.141 [2024-07-12 15:43:19.764850] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:50.398 A filename is required. 
00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.398 00:06:50.398 real 0m0.441s 00:06:50.398 user 0m0.343s 00:06:50.398 sys 0m0.133s 00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.398 15:43:19 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:50.398 ************************************ 00:06:50.398 END TEST accel_missing_filename 00:06:50.398 ************************************ 00:06:50.398 15:43:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.398 15:43:19 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.398 15:43:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:50.398 15:43:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.398 15:43:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.398 ************************************ 00:06:50.398 START TEST accel_compress_verify 00:06:50.398 ************************************ 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.398 15:43:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.398 15:43:19 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:50.398 15:43:19 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:50.398 [2024-07-12 15:43:19.945240] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:50.398 [2024-07-12 15:43:19.945304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110252 ] 00:06:50.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.398 [2024-07-12 15:43:20.007087] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.398 [2024-07-12 15:43:20.116311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.657 [2024-07-12 15:43:20.172667] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:50.657 [2024-07-12 15:43:20.250974] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:50.657 00:06:50.657 Compression does not support the verify option, aborting. 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.657 00:06:50.657 real 0m0.438s 00:06:50.657 user 0m0.331s 00:06:50.657 sys 0m0.141s 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.657 15:43:20 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:50.657 ************************************ 00:06:50.657 END TEST accel_compress_verify 00:06:50.657 ************************************ 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.915 15:43:20 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.915 ************************************ 00:06:50.915 START TEST accel_wrong_workload 00:06:50.915 ************************************ 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.915 15:43:20 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:50.915 15:43:20 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:50.915 Unsupported workload type: foobar 00:06:50.915 [2024-07-12 15:43:20.431738] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:50.915 accel_perf options: 00:06:50.915 [-h help message] 00:06:50.915 [-q queue depth per core] 00:06:50.915 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.915 [-T number of threads per core 00:06:50.915 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.915 [-t time in seconds] 00:06:50.915 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.915 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:50.915 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.915 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.915 [-S for crc32c workload, use this seed value (default 0) 00:06:50.915 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.915 [-f for fill workload, use this BYTE value (default 255) 00:06:50.915 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.915 [-y verify result if this switch is on] 00:06:50.915 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.915 Can be used to spread operations across a wider range of memory. 
00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.915 00:06:50.915 real 0m0.024s 00:06:50.915 user 0m0.012s 00:06:50.915 sys 0m0.012s 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.915 15:43:20 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:50.915 ************************************ 00:06:50.915 END TEST accel_wrong_workload 00:06:50.915 ************************************ 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.915 15:43:20 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.915 15:43:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.915 Error: writing output failed: Broken pipe 00:06:50.916 ************************************ 00:06:50.916 START TEST accel_negative_buffers 00:06:50.916 ************************************ 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:50.916 15:43:20 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:50.916 -x option must be non-negative. 
00:06:50.916 [2024-07-12 15:43:20.495448] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:50.916 accel_perf options: 00:06:50.916 [-h help message] 00:06:50.916 [-q queue depth per core] 00:06:50.916 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:50.916 [-T number of threads per core 00:06:50.916 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:50.916 [-t time in seconds] 00:06:50.916 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:50.916 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:50.916 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:50.916 [-l for compress/decompress workloads, name of uncompressed input file 00:06:50.916 [-S for crc32c workload, use this seed value (default 0) 00:06:50.916 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:50.916 [-f for fill workload, use this BYTE value (default 255) 00:06:50.916 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:50.916 [-y verify result if this switch is on] 00:06:50.916 [-a tasks to allocate per core (default: same value as -q)] 00:06:50.916 Can be used to spread operations across a wider range of memory. 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.916 00:06:50.916 real 0m0.024s 00:06:50.916 user 0m0.013s 00:06:50.916 sys 0m0.011s 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.916 15:43:20 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:50.916 ************************************ 00:06:50.916 END TEST accel_negative_buffers 00:06:50.916 ************************************ 00:06:50.916 Error: writing output failed: Broken pipe 00:06:50.916 15:43:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.916 15:43:20 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:50.916 15:43:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:50.916 15:43:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.916 15:43:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.916 ************************************ 00:06:50.916 START TEST accel_crc32c 00:06:50.916 ************************************ 00:06:50.916 15:43:20 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:50.916 15:43:20 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:50.916 [2024-07-12 15:43:20.563078] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:50.916 [2024-07-12 15:43:20.563143] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110442 ] 00:06:50.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.916 [2024-07-12 15:43:20.619484] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.174 [2024-07-12 15:43:20.725721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.174 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.175 15:43:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:52.602 15:43:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.602 00:06:52.602 real 0m1.440s 00:06:52.602 user 0m1.309s 00:06:52.602 sys 0m0.134s 00:06:52.602 15:43:21 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.602 15:43:21 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:52.602 ************************************ 00:06:52.602 END TEST accel_crc32c 00:06:52.602 ************************************ 00:06:52.602 15:43:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.602 15:43:22 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:52.602 15:43:22 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:52.602 15:43:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.602 15:43:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.602 ************************************ 00:06:52.602 START TEST accel_crc32c_C2 00:06:52.602 ************************************ 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:52.602 [2024-07-12 15:43:22.052778] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:52.602 [2024-07-12 15:43:22.052840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110593 ] 00:06:52.602 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.602 [2024-07-12 15:43:22.109037] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.602 [2024-07-12 15:43:22.214750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.602 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:52.603 15:43:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.972 00:06:53.972 real 0m1.436s 00:06:53.972 user 0m1.308s 00:06:53.972 sys 0m0.130s 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.972 15:43:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:53.972 ************************************ 00:06:53.972 END TEST accel_crc32c_C2 00:06:53.972 ************************************ 00:06:53.972 15:43:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.972 15:43:23 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:53.972 15:43:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:53.972 15:43:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.972 15:43:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.972 ************************************ 00:06:53.972 START TEST accel_copy 00:06:53.972 ************************************ 00:06:53.972 15:43:23 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:53.972 15:43:23 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:53.972 [2024-07-12 15:43:23.537225] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:53.972 [2024-07-12 15:43:23.537289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4110870 ] 00:06:53.972 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.972 [2024-07-12 15:43:23.593121] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.972 [2024-07-12 15:43:23.698107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.229 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.230 15:43:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 
15:43:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:55.601 15:43:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.601 00:06:55.601 real 0m1.427s 00:06:55.601 user 0m1.289s 00:06:55.601 sys 0m0.140s 00:06:55.601 15:43:24 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.601 15:43:24 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:55.601 ************************************ 00:06:55.601 END TEST accel_copy 00:06:55.601 ************************************ 00:06:55.601 15:43:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.601 15:43:24 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.601 15:43:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:55.601 15:43:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.601 15:43:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.601 ************************************ 00:06:55.601 START TEST accel_fill 00:06:55.601 ************************************ 00:06:55.601 15:43:24 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:55.601 15:43:24 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:55.601 [2024-07-12 15:43:25.005786] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:55.601 [2024-07-12 15:43:25.005836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111030 ] 00:06:55.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.601 [2024-07-12 15:43:25.060404] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.601 [2024-07-12 15:43:25.165310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.601 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.602 15:43:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.974 15:43:26 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:56.974 15:43:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.974 00:06:56.974 real 0m1.433s 00:06:56.974 user 0m1.297s 00:06:56.974 sys 0m0.139s 00:06:56.974 15:43:26 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.974 15:43:26 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:56.974 ************************************ 00:06:56.974 END TEST accel_fill 00:06:56.974 ************************************ 00:06:56.974 15:43:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.974 15:43:26 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:56.974 15:43:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.974 15:43:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.974 15:43:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.974 ************************************ 00:06:56.974 START TEST accel_copy_crc32c 00:06:56.974 ************************************ 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:56.974 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:56.974 [2024-07-12 15:43:26.491122] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:56.974 [2024-07-12 15:43:26.491186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111190 ] 00:06:56.974 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.974 [2024-07-12 15:43:26.546402] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.974 [2024-07-12 15:43:26.651653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:57.232 
15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:57.232 15:43:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.605 00:06:58.605 real 0m1.438s 00:06:58.605 user 0m1.309s 00:06:58.605 sys 0m0.132s 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.605 15:43:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:58.605 ************************************ 00:06:58.605 END TEST accel_copy_crc32c 00:06:58.605 ************************************ 00:06:58.605 15:43:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.605 15:43:27 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.605 15:43:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.605 15:43:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.605 15:43:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.605 ************************************ 00:06:58.605 START TEST accel_copy_crc32c_C2 00:06:58.605 ************************************ 00:06:58.605 15:43:27 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.605 15:43:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.605 [2024-07-12 15:43:27.978982] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:06:58.605 [2024-07-12 15:43:27.979046] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111361 ] 00:06:58.605 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.605 [2024-07-12 15:43:28.035895] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.605 [2024-07-12 15:43:28.140377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.605 15:43:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.976 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.976 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.976 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.977 00:06:59.977 real 0m1.437s 00:06:59.977 user 0m1.298s 00:06:59.977 sys 0m0.141s 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.977 15:43:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:59.977 ************************************ 00:06:59.977 END TEST accel_copy_crc32c_C2 00:06:59.977 ************************************ 00:06:59.977 15:43:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.977 15:43:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:59.977 15:43:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.977 15:43:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.977 15:43:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.977 ************************************ 00:06:59.977 START TEST accel_dualcast 00:06:59.977 ************************************ 00:06:59.977 15:43:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:59.977 [2024-07-12 15:43:29.464247] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:06:59.977 [2024-07-12 15:43:29.464330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111624 ] 00:06:59.977 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.977 [2024-07-12 15:43:29.523132] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.977 [2024-07-12 15:43:29.631559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.977 15:43:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.351 15:43:30 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:01.351 15:43:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.351 00:07:01.351 real 0m1.447s 00:07:01.351 user 0m1.305s 00:07:01.351 sys 0m0.143s 00:07:01.351 15:43:30 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.351 15:43:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:01.351 ************************************ 00:07:01.351 END TEST accel_dualcast 00:07:01.351 ************************************ 00:07:01.351 15:43:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.351 15:43:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:01.351 15:43:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.351 15:43:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.351 15:43:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.351 ************************************ 00:07:01.351 START TEST accel_compare 00:07:01.351 ************************************ 00:07:01.351 15:43:30 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:01.351 15:43:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:01.351 [2024-07-12 15:43:30.958366] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
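Editor's sketch: the accel_compare case launched above drives the same accel_perf example binary as the dualcast run that just finished, switching only the workload flag. A minimal standalone form of that invocation, assuming a local SPDK checkout built with examples (the real run also pipes a JSON accel config over /dev/fd/62, omitted here; path is illustrative):
  # 1-second compare workload on the software accel module, verifying results (-y)
  ./spdk/build/examples/accel_perf -t 1 -w compare -y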
00:07:01.351 [2024-07-12 15:43:30.958431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111776 ] 00:07:01.351 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.351 [2024-07-12 15:43:31.017798] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.609 [2024-07-12 15:43:31.124049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.609 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.609 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.609 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.610 15:43:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 
15:43:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:02.982 15:43:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.982 00:07:02.982 real 0m1.444s 00:07:02.982 user 0m1.308s 00:07:02.982 sys 0m0.137s 00:07:02.982 15:43:32 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.982 15:43:32 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:02.982 ************************************ 00:07:02.982 END TEST accel_compare 00:07:02.982 ************************************ 00:07:02.982 15:43:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.982 15:43:32 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:02.982 15:43:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:02.982 15:43:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.982 15:43:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.982 ************************************ 00:07:02.982 START TEST accel_xor 00:07:02.982 ************************************ 00:07:02.982 15:43:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:02.982 [2024-07-12 15:43:32.451757] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:07:02.982 [2024-07-12 15:43:32.451820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4111935 ] 00:07:02.982 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.982 [2024-07-12 15:43:32.509724] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.982 [2024-07-12 15:43:32.623185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.982 15:43:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:04.354 15:43:33 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.354 00:07:04.354 real 0m1.450s 00:07:04.354 user 0m1.315s 00:07:04.354 sys 0m0.137s 00:07:04.354 15:43:33 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.354 15:43:33 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:04.354 ************************************ 00:07:04.354 END TEST accel_xor 00:07:04.354 ************************************ 00:07:04.354 15:43:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.355 15:43:33 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:04.355 15:43:33 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:04.355 15:43:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.355 15:43:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.355 ************************************ 00:07:04.355 START TEST accel_xor 00:07:04.355 ************************************ 00:07:04.355 15:43:33 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:04.355 15:43:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:04.355 [2024-07-12 15:43:33.949446] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
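Editor's sketch: this second xor pass adds -x 3, which the harness reads as three source buffers (val=3 below, versus val=2 in the previous pass); everything else is unchanged. Standalone form under the same assumptions as the earlier sketch:
  # 1-second xor across 3 source buffers, software accel module, verify enabled
  ./spdk/build/examples/accel_perf -t 1 -w xor -y -x 3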
00:07:04.355 [2024-07-12 15:43:33.949505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112201 ] 00:07:04.355 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.355 [2024-07-12 15:43:34.006149] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.613 [2024-07-12 15:43:34.114631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.613 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:04.614 15:43:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:05.990 15:43:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.990 00:07:05.990 real 0m1.436s 00:07:05.990 user 0m1.313s 00:07:05.990 sys 0m0.124s 00:07:05.990 15:43:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.990 15:43:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:05.990 ************************************ 00:07:05.990 END TEST accel_xor 00:07:05.990 ************************************ 00:07:05.990 15:43:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.990 15:43:35 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:05.990 15:43:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:05.990 15:43:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.990 15:43:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.990 ************************************ 00:07:05.990 START TEST accel_dif_verify 00:07:05.990 ************************************ 00:07:05.990 15:43:35 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:05.990 [2024-07-12 15:43:35.429981] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
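Editor's sketch: the dif_verify case starting above exercises the DIF (Data Integrity Field) path; the 4096-, 512- and 8-byte values read below are the sizes the harness feeds in (the xtrace shows only the values, not the option names they bind to). Standalone form under the same assumptions as the earlier sketches:
  # 1-second DIF verify workload on the software accel module
  ./spdk/build/examples/accel_perf -t 1 -w dif_verify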
00:07:05.990 [2024-07-12 15:43:35.430031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112370 ] 00:07:05.990 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.990 [2024-07-12 15:43:35.485493] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.990 [2024-07-12 15:43:35.590203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:05.990 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.991 15:43:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:07.367 15:43:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.367 00:07:07.367 real 0m1.436s 00:07:07.367 user 0m1.300s 00:07:07.367 sys 0m0.140s 00:07:07.367 15:43:36 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.367 15:43:36 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.367 ************************************ 00:07:07.367 END TEST accel_dif_verify 00:07:07.367 ************************************ 00:07:07.367 15:43:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.367 15:43:36 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:07.367 15:43:36 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:07.367 15:43:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.367 15:43:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.367 ************************************ 00:07:07.367 START TEST accel_dif_generate 00:07:07.367 ************************************ 00:07:07.367 15:43:36 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.367 
15:43:36 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:07.367 15:43:36 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:07.367 [2024-07-12 15:43:36.917749] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:07:07.367 [2024-07-12 15:43:36.917812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112521 ] 00:07:07.367 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.367 [2024-07-12 15:43:36.974017] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.367 [2024-07-12 15:43:37.080448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:07.625 15:43:37 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:07.625 15:43:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.033 15:43:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.034 15:43:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:09.034 15:43:38 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.034 00:07:09.034 real 0m1.436s 00:07:09.034 user 0m1.302s 00:07:09.034 sys 0m0.136s 00:07:09.034 15:43:38 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.034 15:43:38 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:09.034 ************************************ 00:07:09.034 END TEST accel_dif_generate 00:07:09.034 ************************************ 00:07:09.034 15:43:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.034 15:43:38 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:09.034 15:43:38 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:09.034 15:43:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.034 15:43:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.034 ************************************ 00:07:09.034 START TEST accel_dif_generate_copy 00:07:09.034 ************************************ 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:09.034 [2024-07-12 15:43:38.394158] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
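For reference, the accel_dif_generate_copy case starting above is driven through the accel_perf example binary shown in the trace. A minimal standalone sketch of that invocation, assuming the same workspace layout and omitting the -c /dev/fd/62 accel configuration descriptor that accel.sh generates on the fly, would be:

    # -t 1 : run the selected workload for one second
    # -w dif_generate_copy : exercise the DIF generate+copy opcode
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w dif_generate_copy

With no extra module configuration the software path is typically what gets exercised, which matches the accel_module=software value and the '[[ -n software ]]' check recorded in the trace.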
00:07:09.034 [2024-07-12 15:43:38.394211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112795 ] 00:07:09.034 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.034 [2024-07-12 15:43:38.450237] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.034 [2024-07-12 15:43:38.554872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:09.034 15:43:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.401 00:07:10.401 real 0m1.431s 00:07:10.401 user 0m1.297s 00:07:10.401 sys 0m0.136s 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.401 15:43:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 ************************************ 00:07:10.401 END TEST accel_dif_generate_copy 00:07:10.401 ************************************ 00:07:10.401 15:43:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.401 15:43:39 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:10.401 15:43:39 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.401 15:43:39 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:10.401 15:43:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.401 15:43:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.401 ************************************ 00:07:10.401 START TEST accel_comp 00:07:10.401 ************************************ 00:07:10.401 15:43:39 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.402 15:43:39 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:10.402 15:43:39 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:10.402 [2024-07-12 15:43:39.870393] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:07:10.402 [2024-07-12 15:43:39.870456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112958 ] 00:07:10.402 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.402 [2024-07-12 15:43:39.927696] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.402 [2024-07-12 15:43:40.039409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:10.402 15:43:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:11.773 15:43:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.773 00:07:11.773 real 0m1.451s 00:07:11.773 user 0m1.315s 00:07:11.773 sys 0m0.139s 00:07:11.773 15:43:41 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.773 15:43:41 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:11.773 ************************************ 00:07:11.773 END TEST accel_comp 00:07:11.773 ************************************ 00:07:11.773 15:43:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.773 15:43:41 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.773 15:43:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:11.773 15:43:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.773 15:43:41 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.773 ************************************ 00:07:11.773 START TEST accel_decomp 00:07:11.773 ************************************ 00:07:11.773 15:43:41 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.773 15:43:41 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.774 15:43:41 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.774 15:43:41 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.774 15:43:41 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:11.774 15:43:41 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:11.774 [2024-07-12 15:43:41.371125] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
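Similarly, the accel_decomp case starting here points accel_perf at the decompress workload with the bundled test/accel/bib data file; the -y flag seen in the run_test line is forwarded as-is by accel.sh. A minimal sketch under the same assumptions (config descriptor omitted):

    # -w decompress : exercise the decompress opcode
    # -l .../test/accel/bib : input data file shipped in the SPDK tree for the compress/decompress tests
    # -y : verification switch forwarded by accel.sh for these cases
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y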
00:07:11.774 [2024-07-12 15:43:41.371186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113118 ] 00:07:11.774 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.774 [2024-07-12 15:43:41.426886] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.031 [2024-07-12 15:43:41.532170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:12.031 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:12.032 15:43:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.405 15:43:42 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.405 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.406 15:43:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.406 00:07:13.406 real 0m1.439s 00:07:13.406 user 0m1.307s 00:07:13.406 sys 0m0.135s 00:07:13.406 15:43:42 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.406 15:43:42 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:13.406 ************************************ 00:07:13.406 END TEST accel_decomp 00:07:13.406 ************************************ 00:07:13.406 15:43:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.406 15:43:42 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.406 15:43:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:13.406 15:43:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.406 15:43:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.406 ************************************ 00:07:13.406 START TEST accel_decomp_full 00:07:13.406 ************************************ 00:07:13.406 15:43:42 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:13.406 15:43:42 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:13.406 15:43:42 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:13.406 [2024-07-12 15:43:42.858398] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:07:13.406 [2024-07-12 15:43:42.858464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113357 ] 00:07:13.406 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.406 [2024-07-12 15:43:42.916905] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.406 [2024-07-12 15:43:43.020816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:13.406 15:43:43 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:14.778 15:43:44 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.778 00:07:14.778 real 0m1.447s 00:07:14.778 user 0m1.307s 00:07:14.778 sys 0m0.142s 00:07:14.778 15:43:44 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.778 15:43:44 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:14.778 ************************************ 00:07:14.778 END TEST accel_decomp_full 00:07:14.778 ************************************ 00:07:14.778 15:43:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.778 15:43:44 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.778 15:43:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:14.778 15:43:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.778 15:43:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.778 ************************************ 00:07:14.778 START TEST accel_decomp_mcore 00:07:14.778 ************************************ 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:14.778 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:14.778 [2024-07-12 15:43:44.352356] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
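The accel_decomp_mcore case above is the same decompress workload widened to four cores via -m 0xf, which is why the EAL parameters that follow show -c 0xf and four reactors (cores 0-3) starting instead of one. A minimal sketch, same assumptions as the earlier examples:

    # -m 0xf : core mask covering cores 0-3 (matches 'Total cores available: 4' in the trace)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf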
00:07:14.778 [2024-07-12 15:43:44.352417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113541 ] 00:07:14.778 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.778 [2024-07-12 15:43:44.411257] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.036 [2024-07-12 15:43:44.517713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.036 [2024-07-12 15:43:44.517779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.036 [2024-07-12 15:43:44.517842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.036 [2024-07-12 15:43:44.517844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.036 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.037 15:43:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.407 00:07:16.407 real 0m1.443s 00:07:16.407 user 0m4.727s 00:07:16.407 sys 0m0.137s 00:07:16.407 15:43:45 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.407 15:43:45 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 ************************************ 00:07:16.407 END TEST accel_decomp_mcore 00:07:16.407 ************************************ 00:07:16.407 15:43:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.407 15:43:45 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.407 15:43:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:16.407 15:43:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.407 15:43:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 ************************************ 00:07:16.407 START TEST accel_decomp_full_mcore 00:07:16.407 ************************************ 00:07:16.407 15:43:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.407 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:16.407 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:16.408 15:43:45 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:16.408 [2024-07-12 15:43:45.842968] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
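[editor's note] For reference, a hand-runnable sketch of the multi-core decompress run traced above. The binary path, input file and flags are copied verbatim from the accel_perf command line in the trace; the SPDK shorthand and the comments are editorial assumptions, and the JSON config the harness feeds on fd 62 is omitted here since this run configures no accel module overrides.

    # workspace path exactly as it appears in this log
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # 1-second software decompress of the bundled test file on cores 0-3 (-m 0xf),
    # verifying results (-y); the remaining flags are exactly as traced above
    $SPDK/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -o 0 -m 0xf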
00:07:16.408 [2024-07-12 15:43:45.843030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113707 ] 00:07:16.408 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.408 [2024-07-12 15:43:45.899489] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.408 [2024-07-12 15:43:46.006225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.408 [2024-07-12 15:43:46.006287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.408 [2024-07-12 15:43:46.006355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.408 [2024-07-12 15:43:46.006358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:16.408 15:43:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.780 00:07:17.780 real 0m1.467s 00:07:17.780 user 0m4.818s 00:07:17.780 sys 0m0.152s 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.780 15:43:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:17.780 ************************************ 00:07:17.780 END TEST accel_decomp_full_mcore 00:07:17.780 ************************************ 00:07:17.781 15:43:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.781 15:43:47 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.781 15:43:47 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:17.781 15:43:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.781 15:43:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.781 ************************************ 00:07:17.781 START TEST accel_decomp_mthread 00:07:17.781 ************************************ 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:17.781 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:17.781 [2024-07-12 15:43:47.356074] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
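[editor's note] Likewise, a hedged sketch of the threaded variant traced above: the same decompress workload on a single core with -T 2, matching the accel_perf command line in the trace. The reading of -T as a per-core thread count is an interpretation; the flags themselves are verbatim.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # single-core run (default mask), two worker threads (-T 2), same input file
    $SPDK/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -T 2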
00:07:17.781 [2024-07-12 15:43:47.356136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4113909 ] 00:07:17.781 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.781 [2024-07-12 15:43:47.412781] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.039 [2024-07-12 15:43:47.520815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:18.039 15:43:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.413 15:43:48 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.413 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.414 00:07:19.414 real 0m1.446s 00:07:19.414 user 0m1.320s 00:07:19.414 sys 0m0.129s 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.414 15:43:48 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:19.414 ************************************ 00:07:19.414 END TEST accel_decomp_mthread 00:07:19.414 ************************************ 00:07:19.414 15:43:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.414 15:43:48 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.414 15:43:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:19.414 15:43:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.414 15:43:48 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:19.414 ************************************ 00:07:19.414 START TEST accel_decomp_full_mthread 00:07:19.414 ************************************ 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:19.414 15:43:48 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:19.414 [2024-07-12 15:43:48.850438] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
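[editor's note] The long runs of 'IFS=:', 'read -r var val' and 'case "$var" in' throughout these tests come from accel.sh walking the per-test settings (opcode, module, input file, sizes, duration) that it hands to accel_perf. A simplified sketch of that parsing pattern follows; the keys and the here-string input are invented purely for illustration and this is not the upstream script itself.

    # parse "name:value" pairs the way the traced loop does and record the two
    # values the assertions at the end of each test check for
    while IFS=: read -r var val; do
        case "$var" in
            opc)    accel_opc=$val ;;     # e.g. decompress
            module) accel_module=$val ;;  # e.g. software
            *)      ;;                    # other settings (file, sizes, '1 seconds', ...)
        esac
    done <<< $'opc:decompress\nmodule:software'
    echo "$accel_module / $accel_opc"     # software / decompress, as asserted above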
00:07:19.414 [2024-07-12 15:43:48.850500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114135 ] 00:07:19.414 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.414 [2024-07-12 15:43:48.906842] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.414 [2024-07-12 15:43:49.011438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:19.414 15:43:49 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:19.414 15:43:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:20.786 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:20.787 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:20.787 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:20.787 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.787 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:20.787 15:43:50 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.787 00:07:20.787 real 0m1.468s 00:07:20.787 user 0m1.334s 00:07:20.787 sys 0m0.135s 00:07:20.787 15:43:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.787 15:43:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:20.787 ************************************ 00:07:20.787 END 
TEST accel_decomp_full_mthread 00:07:20.787 ************************************ 00:07:20.787 15:43:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.787 15:43:50 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:20.787 15:43:50 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.787 15:43:50 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:20.787 15:43:50 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:20.787 15:43:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.787 15:43:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.787 15:43:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.787 15:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.787 15:43:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.787 15:43:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.787 15:43:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.787 15:43:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:20.787 15:43:50 accel -- accel/accel.sh@41 -- # jq -r . 00:07:20.787 ************************************ 00:07:20.787 START TEST accel_dif_functional_tests 00:07:20.787 ************************************ 00:07:20.787 15:43:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:20.787 [2024-07-12 15:43:50.386804] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:07:20.787 [2024-07-12 15:43:50.386862] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114298 ] 00:07:20.787 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.787 [2024-07-12 15:43:50.441587] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.045 [2024-07-12 15:43:50.550664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.045 [2024-07-12 15:43:50.550729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.045 [2024-07-12 15:43:50.550733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.045 00:07:21.045 00:07:21.045 CUnit - A unit testing framework for C - Version 2.1-3 00:07:21.045 http://cunit.sourceforge.net/ 00:07:21.045 00:07:21.045 00:07:21.045 Suite: accel_dif 00:07:21.045 Test: verify: DIF generated, GUARD check ...passed 00:07:21.045 Test: verify: DIF generated, APPTAG check ...passed 00:07:21.045 Test: verify: DIF generated, REFTAG check ...passed 00:07:21.045 Test: verify: DIF not generated, GUARD check ...[2024-07-12 15:43:50.648090] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.045 passed 00:07:21.045 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 15:43:50.648157] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.045 passed 00:07:21.045 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 15:43:50.648189] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.045 passed 00:07:21.045 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:21.045 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 
15:43:50.648258] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:21.045 passed 00:07:21.045 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:21.045 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:21.045 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:21.045 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 15:43:50.648415] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:21.045 passed 00:07:21.045 Test: verify copy: DIF generated, GUARD check ...passed 00:07:21.045 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:21.045 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:21.045 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 15:43:50.648576] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:21.045 passed 00:07:21.045 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 15:43:50.648613] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:21.045 passed 00:07:21.045 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 15:43:50.648668] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:21.045 passed 00:07:21.045 Test: generate copy: DIF generated, GUARD check ...passed 00:07:21.045 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:21.045 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:21.045 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:21.045 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:21.045 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:21.045 Test: generate copy: iovecs-len validate ...[2024-07-12 15:43:50.648888] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:21.045 passed 00:07:21.045 Test: generate copy: buffer alignment validate ...passed 00:07:21.045 00:07:21.045 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.045 suites 1 1 n/a 0 0 00:07:21.045 tests 26 26 26 0 0 00:07:21.045 asserts 115 115 115 0 n/a 00:07:21.045 00:07:21.045 Elapsed time = 0.003 seconds 00:07:21.302 00:07:21.302 real 0m0.544s 00:07:21.302 user 0m0.839s 00:07:21.302 sys 0m0.174s 00:07:21.302 15:43:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.302 15:43:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:21.302 ************************************ 00:07:21.302 END TEST accel_dif_functional_tests 00:07:21.302 ************************************ 00:07:21.302 15:43:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.302 00:07:21.302 real 0m32.592s 00:07:21.302 user 0m36.172s 00:07:21.302 sys 0m4.408s 00:07:21.302 15:43:50 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.302 15:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.302 ************************************ 00:07:21.302 END TEST accel 00:07:21.302 ************************************ 00:07:21.302 15:43:50 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.302 15:43:50 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:21.302 15:43:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.302 15:43:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.302 15:43:50 -- common/autotest_common.sh@10 -- # set +x 00:07:21.302 ************************************ 00:07:21.302 START TEST accel_rpc 00:07:21.302 ************************************ 00:07:21.302 15:43:50 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:21.302 * Looking for test storage... 00:07:21.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:21.302 15:43:51 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:21.302 15:43:51 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4114481 00:07:21.303 15:43:51 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:21.303 15:43:51 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4114481 00:07:21.303 15:43:51 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 4114481 ']' 00:07:21.303 15:43:51 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.303 15:43:51 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.303 15:43:51 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.303 15:43:51 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.303 15:43:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.560 [2024-07-12 15:43:51.069706] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
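[editor's note] A hedged sketch of the RPC sequence the accel_rpc test drives against the spdk_tgt instance started above (launched with --wait-for-rpc so opcode assignment can happen before subsystem init). The RPC names, module names and the jq check are taken from the trace that follows; the SPDK/RPC shorthands and the ordering comments are editorial.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    $RPC accel_assign_opc -o copy -m incorrect   # bogus module name is accepted pre-init
    $RPC accel_assign_opc -o copy -m software    # then reassign copy to the software module
    $RPC framework_start_init                    # finish startup of the --wait-for-rpc target
    $RPC accel_get_opc_assignments | jq -r .copy # prints "software", as the test greps for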
00:07:21.560 [2024-07-12 15:43:51.069784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114481 ] 00:07:21.560 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.560 [2024-07-12 15:43:51.125911] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.560 [2024-07-12 15:43:51.232678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.560 15:43:51 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.560 15:43:51 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:21.560 15:43:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:21.560 15:43:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:21.560 15:43:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:21.560 15:43:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:21.560 15:43:51 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:21.560 15:43:51 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.560 15:43:51 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.560 15:43:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.560 ************************************ 00:07:21.560 START TEST accel_assign_opcode 00:07:21.560 ************************************ 00:07:21.560 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:21.560 15:43:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:21.816 [2024-07-12 15:43:51.293222] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:21.816 [2024-07-12 15:43:51.301232] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.816 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:22.073 
15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.073 software 00:07:22.073 00:07:22.073 real 0m0.298s 00:07:22.073 user 0m0.039s 00:07:22.073 sys 0m0.007s 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.073 15:43:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:22.073 ************************************ 00:07:22.073 END TEST accel_assign_opcode 00:07:22.073 ************************************ 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:22.073 15:43:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4114481 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 4114481 ']' 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 4114481 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4114481 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4114481' 00:07:22.073 killing process with pid 4114481 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@967 -- # kill 4114481 00:07:22.073 15:43:51 accel_rpc -- common/autotest_common.sh@972 -- # wait 4114481 00:07:22.638 00:07:22.638 real 0m1.114s 00:07:22.638 user 0m1.041s 00:07:22.638 sys 0m0.418s 00:07:22.638 15:43:52 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.638 15:43:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.638 ************************************ 00:07:22.638 END TEST accel_rpc 00:07:22.638 ************************************ 00:07:22.638 15:43:52 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.638 15:43:52 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:22.638 15:43:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.638 15:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.638 15:43:52 -- common/autotest_common.sh@10 -- # set +x 00:07:22.638 ************************************ 00:07:22.638 START TEST app_cmdline 00:07:22.638 ************************************ 00:07:22.638 15:43:52 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:22.638 * Looking for test storage... 
00:07:22.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.638 15:43:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:22.638 15:43:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4114685 00:07:22.638 15:43:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:22.638 15:43:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4114685 00:07:22.638 15:43:52 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 4114685 ']' 00:07:22.638 15:43:52 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.638 15:43:52 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.638 15:43:52 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.638 15:43:52 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.638 15:43:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:22.638 [2024-07-12 15:43:52.234031] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:07:22.638 [2024-07-12 15:43:52.234107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4114685 ] 00:07:22.638 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.638 [2024-07-12 15:43:52.290723] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.894 [2024-07-12 15:43:52.399156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.152 15:43:52 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.152 15:43:52 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:23.152 15:43:52 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:23.152 { 00:07:23.152 "version": "SPDK v24.09-pre git sha1 26acb15a6", 00:07:23.152 "fields": { 00:07:23.152 "major": 24, 00:07:23.152 "minor": 9, 00:07:23.152 "patch": 0, 00:07:23.152 "suffix": "-pre", 00:07:23.152 "commit": "26acb15a6" 00:07:23.152 } 00:07:23.152 } 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:23.409 15:43:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:23.409 15:43:52 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.665 request: 00:07:23.665 { 00:07:23.665 "method": "env_dpdk_get_mem_stats", 00:07:23.665 "req_id": 1 00:07:23.665 } 00:07:23.665 Got JSON-RPC error response 00:07:23.665 response: 00:07:23.665 { 00:07:23.665 "code": -32601, 00:07:23.665 "message": "Method not found" 00:07:23.665 } 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.665 15:43:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4114685 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 4114685 ']' 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 4114685 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4114685 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4114685' 00:07:23.665 killing process with pid 4114685 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@967 -- # kill 4114685 00:07:23.665 15:43:53 app_cmdline -- common/autotest_common.sh@972 -- # wait 4114685 00:07:24.267 00:07:24.267 real 0m1.563s 00:07:24.267 user 0m1.921s 00:07:24.267 sys 0m0.472s 00:07:24.267 15:43:53 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:24.267 15:43:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:24.267 ************************************ 00:07:24.267 END TEST app_cmdline 00:07:24.267 ************************************ 00:07:24.267 15:43:53 -- common/autotest_common.sh@1142 -- # return 0 00:07:24.267 15:43:53 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:24.267 15:43:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.267 15:43:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.267 15:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:24.267 ************************************ 00:07:24.267 START TEST version 00:07:24.267 ************************************ 00:07:24.267 15:43:53 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:24.267 * Looking for test storage... 00:07:24.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:24.267 15:43:53 version -- app/version.sh@17 -- # get_header_version major 00:07:24.267 15:43:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # cut -f2 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.267 15:43:53 version -- app/version.sh@17 -- # major=24 00:07:24.267 15:43:53 version -- app/version.sh@18 -- # get_header_version minor 00:07:24.267 15:43:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # cut -f2 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.267 15:43:53 version -- app/version.sh@18 -- # minor=9 00:07:24.267 15:43:53 version -- app/version.sh@19 -- # get_header_version patch 00:07:24.267 15:43:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # cut -f2 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.267 15:43:53 version -- app/version.sh@19 -- # patch=0 00:07:24.267 15:43:53 version -- app/version.sh@20 -- # get_header_version suffix 00:07:24.267 15:43:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # cut -f2 00:07:24.267 15:43:53 version -- app/version.sh@14 -- # tr -d '"' 00:07:24.267 15:43:53 version -- app/version.sh@20 -- # suffix=-pre 00:07:24.267 15:43:53 version -- app/version.sh@22 -- # version=24.9 00:07:24.267 15:43:53 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:24.267 15:43:53 version -- app/version.sh@28 -- # version=24.9rc0 00:07:24.267 15:43:53 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:24.267 15:43:53 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:24.267 15:43:53 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:24.267 15:43:53 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:24.267 00:07:24.267 real 0m0.114s 00:07:24.267 user 0m0.061s 00:07:24.267 sys 0m0.076s 00:07:24.267 15:43:53 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.267 15:43:53 version -- common/autotest_common.sh@10 -- # set +x 00:07:24.267 ************************************ 00:07:24.267 END TEST version 00:07:24.267 ************************************ 00:07:24.267 15:43:53 -- common/autotest_common.sh@1142 -- # return 0 00:07:24.267 15:43:53 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@198 -- # uname -s 00:07:24.267 15:43:53 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:24.267 15:43:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:24.267 15:43:53 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:24.267 15:43:53 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:24.267 15:43:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:24.267 15:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:24.267 15:43:53 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:24.267 15:43:53 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:24.267 15:43:53 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.267 15:43:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.267 15:43:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.267 15:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:24.267 ************************************ 00:07:24.267 START TEST nvmf_tcp 00:07:24.268 ************************************ 00:07:24.268 15:43:53 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:24.268 * Looking for test storage... 00:07:24.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.268 15:43:53 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.268 15:43:53 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.268 15:43:53 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.268 15:43:53 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.268 15:43:53 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.268 15:43:53 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.268 15:43:53 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:24.268 15:43:53 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:24.268 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:24.526 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:24.526 15:43:53 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.526 15:43:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.526 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:24.526 15:43:53 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:24.526 15:43:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:24.526 15:43:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.526 15:43:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:24.526 ************************************ 00:07:24.526 START TEST nvmf_example 00:07:24.526 ************************************ 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:24.527 * Looking for test storage... 
00:07:24.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:24.527 15:43:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:26.430 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:26.430 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:26.430 Found net devices under 
0000:09:00.0: cvl_0_0 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:26.430 Found net devices under 0000:09:00.1: cvl_0_1 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:26.430 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:26.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:07:26.689 00:07:26.689 --- 10.0.0.2 ping statistics --- 00:07:26.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.689 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:26.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:26.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:07:26.689 00:07:26.689 --- 10.0.0.1 ping statistics --- 00:07:26.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.689 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4116598 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4116598 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 4116598 ']' 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.689 15:43:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:26.689 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.621 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.621 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:27.621 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:27.621 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:27.621 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:27.878 15:43:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:27.878 EAL: No free 2048 kB hugepages reported on node 1 
00:07:40.084 Initializing NVMe Controllers 00:07:40.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:40.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:40.084 Initialization complete. Launching workers. 00:07:40.084 ======================================================== 00:07:40.084 Latency(us) 00:07:40.084 Device Information : IOPS MiB/s Average min max 00:07:40.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15068.90 58.86 4249.44 810.45 15184.25 00:07:40.084 ======================================================== 00:07:40.084 Total : 15068.90 58.86 4249.44 810.45 15184.25 00:07:40.084 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:40.084 rmmod nvme_tcp 00:07:40.084 rmmod nvme_fabrics 00:07:40.084 rmmod nvme_keyring 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4116598 ']' 00:07:40.084 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4116598 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 4116598 ']' 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 4116598 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4116598 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4116598' 00:07:40.085 killing process with pid 4116598 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 4116598 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 4116598 00:07:40.085 nvmf threads initialize successfully 00:07:40.085 bdev subsystem init successfully 00:07:40.085 created a nvmf target service 00:07:40.085 create targets's poll groups done 00:07:40.085 all subsystems of target started 00:07:40.085 nvmf target is running 00:07:40.085 all subsystems of target stopped 00:07:40.085 destroy targets's poll groups done 00:07:40.085 destroyed the nvmf target service 00:07:40.085 bdev subsystem finish successfully 00:07:40.085 nvmf threads destroy successfully 00:07:40.085 15:44:07 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.085 15:44:07 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.344 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.344 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:40.344 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.344 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.344 00:07:40.344 real 0m16.064s 00:07:40.344 user 0m45.521s 00:07:40.344 sys 0m3.345s 00:07:40.344 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.344 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.344 ************************************ 00:07:40.345 END TEST nvmf_example 00:07:40.345 ************************************ 00:07:40.606 15:44:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:40.606 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:40.606 15:44:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.606 15:44:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.606 15:44:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.606 ************************************ 00:07:40.606 START TEST nvmf_filesystem 00:07:40.607 ************************************ 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:40.607 * Looking for test storage... 
00:07:40.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:40.607 15:44:10 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:40.607 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:40.608 #define SPDK_CONFIG_H 00:07:40.608 #define SPDK_CONFIG_APPS 1 00:07:40.608 #define SPDK_CONFIG_ARCH native 00:07:40.608 #undef SPDK_CONFIG_ASAN 00:07:40.608 #undef SPDK_CONFIG_AVAHI 00:07:40.608 #undef SPDK_CONFIG_CET 00:07:40.608 #define SPDK_CONFIG_COVERAGE 1 00:07:40.608 #define SPDK_CONFIG_CROSS_PREFIX 00:07:40.608 #undef SPDK_CONFIG_CRYPTO 00:07:40.608 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:40.608 #undef SPDK_CONFIG_CUSTOMOCF 00:07:40.608 #undef SPDK_CONFIG_DAOS 00:07:40.608 #define SPDK_CONFIG_DAOS_DIR 00:07:40.608 #define SPDK_CONFIG_DEBUG 1 00:07:40.608 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:40.608 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:40.608 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:40.608 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:40.608 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:40.608 #undef SPDK_CONFIG_DPDK_UADK 00:07:40.608 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:40.608 #define SPDK_CONFIG_EXAMPLES 1 00:07:40.608 #undef SPDK_CONFIG_FC 00:07:40.608 #define SPDK_CONFIG_FC_PATH 00:07:40.608 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:40.608 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:40.608 #undef SPDK_CONFIG_FUSE 00:07:40.608 #undef SPDK_CONFIG_FUZZER 00:07:40.608 #define SPDK_CONFIG_FUZZER_LIB 00:07:40.608 #undef SPDK_CONFIG_GOLANG 00:07:40.608 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:40.608 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:40.608 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:40.608 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:40.608 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:40.608 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:40.608 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:40.608 #define SPDK_CONFIG_IDXD 1 00:07:40.608 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:40.608 #undef SPDK_CONFIG_IPSEC_MB 00:07:40.608 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:40.608 #define SPDK_CONFIG_ISAL 1 00:07:40.608 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:40.608 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:40.608 #define SPDK_CONFIG_LIBDIR 00:07:40.608 #undef SPDK_CONFIG_LTO 00:07:40.608 #define SPDK_CONFIG_MAX_LCORES 128 00:07:40.608 #define SPDK_CONFIG_NVME_CUSE 1 00:07:40.608 #undef SPDK_CONFIG_OCF 00:07:40.608 #define SPDK_CONFIG_OCF_PATH 00:07:40.608 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:40.608 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:40.608 #define SPDK_CONFIG_PGO_DIR 00:07:40.608 #undef SPDK_CONFIG_PGO_USE 00:07:40.608 #define SPDK_CONFIG_PREFIX /usr/local 00:07:40.608 #undef SPDK_CONFIG_RAID5F 00:07:40.608 #undef SPDK_CONFIG_RBD 00:07:40.608 #define SPDK_CONFIG_RDMA 1 00:07:40.608 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:40.608 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:40.608 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:40.608 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:40.608 #define SPDK_CONFIG_SHARED 1 00:07:40.608 #undef SPDK_CONFIG_SMA 00:07:40.608 #define SPDK_CONFIG_TESTS 1 00:07:40.608 #undef SPDK_CONFIG_TSAN 00:07:40.608 #define SPDK_CONFIG_UBLK 1 00:07:40.608 #define SPDK_CONFIG_UBSAN 1 00:07:40.608 #undef SPDK_CONFIG_UNIT_TESTS 00:07:40.608 #undef SPDK_CONFIG_URING 00:07:40.608 #define SPDK_CONFIG_URING_PATH 00:07:40.608 #undef SPDK_CONFIG_URING_ZNS 00:07:40.608 #undef SPDK_CONFIG_USDT 00:07:40.608 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:40.608 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:40.608 #define SPDK_CONFIG_VFIO_USER 1 00:07:40.608 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:40.608 #define SPDK_CONFIG_VHOST 1 00:07:40.608 #define SPDK_CONFIG_VIRTIO 1 00:07:40.608 #undef SPDK_CONFIG_VTUNE 00:07:40.608 #define SPDK_CONFIG_VTUNE_DIR 00:07:40.608 #define SPDK_CONFIG_WERROR 1 00:07:40.608 #define SPDK_CONFIG_WPDK_DIR 00:07:40.608 #undef SPDK_CONFIG_XNVME 00:07:40.608 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:40.608 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:40.609 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:40.610 15:44:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:40.610 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
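The long run of paired ": 0" / "export SPDK_TEST_*" entries traced above is the standard shell defaulting idiom: each test flag keeps whatever value the job configuration already exported and only falls back to the shown default otherwise. A minimal stand-alone sketch of that idiom (illustrative flag names only, not the actual autotest_common.sh source):

    #!/usr/bin/env bash
    # Keep a caller-provided value, otherwise fall back to the default, then export.
    : "${SPDK_TEST_NVMF:=0}"               # stays at 1 if the job config already set it
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"   # transport under test
    export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT
    echo "NVMF=${SPDK_TEST_NVMF} transport=${SPDK_TEST_NVMF_TRANSPORT}"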
00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 4118420 ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 4118420 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.eYMXzc 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.eYMXzc/tests/target /tmp/spdk.eYMXzc 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=51229859840 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994737664 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10764877824 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941732864 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997368832 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8765440 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996299776 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997368832 00:07:40.611 15:44:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1069056 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:40.611 * Looking for test storage... 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=51229859840 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12979470336 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.611 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:40.612 15:44:10 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
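The set_test_storage block traced just above decides where the test scratch area will live: it requests roughly 2 GiB, generates a fallback directory with "mktemp -udt spdk.XXXXXX", walks the candidate list against the df -T output, and accepts the first mount with enough free space (the overlay root here, with about 51 GB available). A simplified sketch of that space check, assuming a single candidate and GNU coreutils df rather than the script's exact awk parsing:

    #!/usr/bin/env bash
    requested_size=$((2 * 1024 * 1024 * 1024))    # ~2 GiB, as requested in the trace
    candidate=$(mktemp -udt spdk.XXXXXX)          # fallback scratch path under /tmp
    mkdir -p "$candidate"
    # Free space (in 1K blocks) on the filesystem holding the candidate directory.
    avail_kb=$(df --output=avail "$candidate" | tail -n 1)
    if (( avail_kb * 1024 >= requested_size )); then
        echo "using $candidate for test storage"
    else
        echo "insufficient space under $candidate" >&2
        exit 1
    fi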
00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.612 15:44:10 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.612 15:44:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:43.145 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:43.146 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:43.146 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.146 15:44:12 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:43.146 Found net devices under 0000:09:00.0: cvl_0_0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:43.146 Found net devices under 0000:09:00.1: cvl_0_1 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:43.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:43.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:43.146 00:07:43.146 --- 10.0.0.2 ping statistics --- 00:07:43.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.146 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:43.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:43.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:07:43.146 00:07:43.146 --- 10.0.0.1 ping statistics --- 00:07:43.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:43.146 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.146 ************************************ 00:07:43.146 START TEST nvmf_filesystem_no_in_capsule 00:07:43.146 ************************************ 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4120046 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4120046 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4120046 ']' 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.146 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.146 [2024-07-12 15:44:12.643008] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:07:43.146 [2024-07-12 15:44:12.643083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.146 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.146 [2024-07-12 15:44:12.706903] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.146 [2024-07-12 15:44:12.817492] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.146 [2024-07-12 15:44:12.817541] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.146 [2024-07-12 15:44:12.817555] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.146 [2024-07-12 15:44:12.817566] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.146 [2024-07-12 15:44:12.817580] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
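Before the filesystem tests start, nvmf/common.sh (traced above) turns the two e810 ports it found into a point-to-point NVMe/TCP link on a single machine: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1); nvmf_tgt is then launched inside that namespace and waitforlisten polls its RPC socket at /var/tmp/spdk.sock. A condensed sketch of that bring-up, with interface names, addresses and paths exactly as they appear in this run (all of them are runner-specific):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &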
00:07:43.146 [2024-07-12 15:44:12.817635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.146 [2024-07-12 15:44:12.817693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.146 [2024-07-12 15:44:12.817769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.146 [2024-07-12 15:44:12.817772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.403 [2024-07-12 15:44:12.971158] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.403 15:44:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.661 Malloc1 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.661 [2024-07-12 15:44:13.164979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:43.661 { 00:07:43.661 "name": "Malloc1", 00:07:43.661 "aliases": [ 00:07:43.661 "82d1aae6-5a92-4552-b60c-d6ab55ecb085" 00:07:43.661 ], 00:07:43.661 "product_name": "Malloc disk", 00:07:43.661 "block_size": 512, 00:07:43.661 "num_blocks": 1048576, 00:07:43.661 "uuid": "82d1aae6-5a92-4552-b60c-d6ab55ecb085", 00:07:43.661 "assigned_rate_limits": { 00:07:43.661 "rw_ios_per_sec": 0, 00:07:43.661 "rw_mbytes_per_sec": 0, 00:07:43.661 "r_mbytes_per_sec": 0, 00:07:43.661 "w_mbytes_per_sec": 0 00:07:43.661 }, 00:07:43.661 "claimed": true, 00:07:43.661 "claim_type": "exclusive_write", 00:07:43.661 "zoned": false, 00:07:43.661 "supported_io_types": { 00:07:43.661 "read": true, 00:07:43.661 "write": true, 00:07:43.661 "unmap": true, 00:07:43.661 "flush": true, 00:07:43.661 "reset": true, 00:07:43.661 "nvme_admin": false, 00:07:43.661 "nvme_io": false, 00:07:43.661 "nvme_io_md": false, 00:07:43.661 "write_zeroes": true, 00:07:43.661 "zcopy": true, 00:07:43.661 "get_zone_info": false, 00:07:43.661 "zone_management": false, 00:07:43.661 "zone_append": false, 00:07:43.661 "compare": false, 00:07:43.661 "compare_and_write": false, 00:07:43.661 "abort": true, 00:07:43.661 "seek_hole": false, 00:07:43.661 "seek_data": false, 00:07:43.661 "copy": true, 00:07:43.661 "nvme_iov_md": false 00:07:43.661 }, 00:07:43.661 "memory_domains": [ 00:07:43.661 { 
00:07:43.661 "dma_device_id": "system", 00:07:43.661 "dma_device_type": 1 00:07:43.661 }, 00:07:43.661 { 00:07:43.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.661 "dma_device_type": 2 00:07:43.661 } 00:07:43.661 ], 00:07:43.661 "driver_specific": {} 00:07:43.661 } 00:07:43.661 ]' 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.661 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:44.226 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:44.226 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:44.226 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:44.226 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:44.226 15:44:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:46.750 15:44:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:46.751 15:44:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:47.315 15:44:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 ************************************ 00:07:48.733 START TEST filesystem_ext4 00:07:48.733 ************************************ 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:48.733 15:44:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:48.733 15:44:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:48.733 mke2fs 1.46.5 (30-Dec-2021) 00:07:48.733 Discarding device blocks: 0/522240 done 00:07:48.733 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:48.733 Filesystem UUID: 66621680-8fcb-43f3-9a8c-fc1e23eea65b 00:07:48.733 Superblock backups stored on blocks: 00:07:48.733 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:48.733 00:07:48.733 Allocating group tables: 0/64 done 00:07:48.733 Writing inode tables: 0/64 done 00:07:51.279 Creating journal (8192 blocks): done 00:07:51.280 Writing superblocks and filesystem accounting information: 0/64 done 00:07:51.280 00:07:51.280 15:44:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:51.280 15:44:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.844 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.844 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:51.844 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.844 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:51.844 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4120046 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.845 00:07:51.845 real 0m3.444s 00:07:51.845 user 0m0.024s 00:07:51.845 sys 0m0.055s 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:51.845 ************************************ 00:07:51.845 END TEST filesystem_ext4 00:07:51.845 ************************************ 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:51.845 15:44:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.845 ************************************ 00:07:51.845 START TEST filesystem_btrfs 00:07:51.845 ************************************ 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:51.845 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:52.103 btrfs-progs v6.6.2 00:07:52.103 See https://btrfs.readthedocs.io for more information. 00:07:52.103 00:07:52.103 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:52.103 NOTE: several default settings have changed in version 5.15, please make sure 00:07:52.103 this does not affect your deployments: 00:07:52.103 - DUP for metadata (-m dup) 00:07:52.103 - enabled no-holes (-O no-holes) 00:07:52.103 - enabled free-space-tree (-R free-space-tree) 00:07:52.103 00:07:52.103 Label: (null) 00:07:52.103 UUID: b0fdbe04-fb4a-4e24-8a26-c3fbd25d49f0 00:07:52.103 Node size: 16384 00:07:52.103 Sector size: 4096 00:07:52.103 Filesystem size: 510.00MiB 00:07:52.103 Block group profiles: 00:07:52.103 Data: single 8.00MiB 00:07:52.103 Metadata: DUP 32.00MiB 00:07:52.103 System: DUP 8.00MiB 00:07:52.103 SSD detected: yes 00:07:52.103 Zoned device: no 00:07:52.103 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:52.103 Runtime features: free-space-tree 00:07:52.103 Checksum: crc32c 00:07:52.103 Number of devices: 1 00:07:52.103 Devices: 00:07:52.103 ID SIZE PATH 00:07:52.103 1 510.00MiB /dev/nvme0n1p1 00:07:52.103 00:07:52.103 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:52.103 15:44:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4120046 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.037 00:07:53.037 real 0m0.928s 00:07:53.037 user 0m0.022s 00:07:53.037 sys 0m0.107s 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 ************************************ 00:07:53.037 END TEST filesystem_btrfs 00:07:53.037 ************************************ 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.037 ************************************ 00:07:53.037 START TEST filesystem_xfs 00:07:53.037 ************************************ 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:53.037 15:44:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:53.037 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:53.037 = sectsz=512 attr=2, projid32bit=1 00:07:53.037 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:53.037 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:53.037 data = bsize=4096 blocks=130560, imaxpct=25 00:07:53.037 = sunit=0 swidth=0 blks 00:07:53.037 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:53.037 log =internal log bsize=4096 blocks=16384, version=2 00:07:53.037 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:53.037 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:53.968 Discarding blocks...Done. 
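Each filesystem_* subtest above (ext4, btrfs, and now xfs) exercises the exported namespace the same way once mkfs finishes: mount the freshly formatted partition, create and remove a file through the NVMe/TCP path, then confirm the target process and the block device are both still there. Condensed from the target/filesystem.sh steps visible in the trace (the pid and device names are those of this run):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                      # a real write, end to end over the fabric
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 4120046                            # nvmf_tgt must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1      # controller still visible on the host
lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still visible on the host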
00:07:53.968 15:44:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:53.968 15:44:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.865 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4120046 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.122 00:07:56.122 real 0m3.195s 00:07:56.122 user 0m0.028s 00:07:56.122 sys 0m0.051s 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:56.122 ************************************ 00:07:56.122 END TEST filesystem_xfs 00:07:56.122 ************************************ 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:56.122 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.380 15:44:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4120046 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4120046 ']' 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4120046 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4120046 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4120046' 00:07:56.380 killing process with pid 4120046 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 4120046 00:07:56.380 15:44:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 4120046 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:56.944 00:07:56.944 real 0m13.809s 00:07:56.944 user 0m52.940s 00:07:56.944 sys 0m1.946s 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.944 ************************************ 00:07:56.944 END TEST nvmf_filesystem_no_in_capsule 00:07:56.944 ************************************ 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:56.944 ************************************ 00:07:56.944 START TEST nvmf_filesystem_in_capsule 00:07:56.944 ************************************ 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4121883 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4121883 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4121883 ']' 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.944 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.944 [2024-07-12 15:44:26.506582] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:07:56.944 [2024-07-12 15:44:26.506674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.944 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.944 [2024-07-12 15:44:26.575429] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.200 [2024-07-12 15:44:26.688419] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.201 [2024-07-12 15:44:26.688467] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:57.201 [2024-07-12 15:44:26.688497] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.201 [2024-07-12 15:44:26.688509] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.201 [2024-07-12 15:44:26.688520] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.201 [2024-07-12 15:44:26.688578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.201 [2024-07-12 15:44:26.688640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.201 [2024-07-12 15:44:26.688677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.201 [2024-07-12 15:44:26.688680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.201 [2024-07-12 15:44:26.848188] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.201 15:44:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.457 Malloc1 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.457 15:44:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.457 [2024-07-12 15:44:27.026865] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:57.457 { 00:07:57.457 "name": "Malloc1", 00:07:57.457 "aliases": [ 00:07:57.457 "1f59302c-fa4c-4663-962f-115666747541" 00:07:57.457 ], 00:07:57.457 "product_name": "Malloc disk", 00:07:57.457 "block_size": 512, 00:07:57.457 "num_blocks": 1048576, 00:07:57.457 "uuid": "1f59302c-fa4c-4663-962f-115666747541", 00:07:57.457 "assigned_rate_limits": { 00:07:57.457 "rw_ios_per_sec": 0, 00:07:57.457 "rw_mbytes_per_sec": 0, 00:07:57.457 "r_mbytes_per_sec": 0, 00:07:57.457 "w_mbytes_per_sec": 0 00:07:57.457 }, 00:07:57.457 "claimed": true, 00:07:57.457 "claim_type": "exclusive_write", 00:07:57.457 "zoned": false, 00:07:57.457 "supported_io_types": { 00:07:57.457 "read": true, 00:07:57.457 "write": true, 00:07:57.457 "unmap": true, 00:07:57.457 "flush": true, 00:07:57.457 "reset": true, 00:07:57.457 "nvme_admin": false, 00:07:57.457 "nvme_io": false, 00:07:57.457 "nvme_io_md": false, 00:07:57.457 "write_zeroes": true, 00:07:57.457 "zcopy": true, 00:07:57.457 "get_zone_info": false, 00:07:57.457 "zone_management": false, 00:07:57.457 
"zone_append": false, 00:07:57.457 "compare": false, 00:07:57.457 "compare_and_write": false, 00:07:57.457 "abort": true, 00:07:57.457 "seek_hole": false, 00:07:57.457 "seek_data": false, 00:07:57.457 "copy": true, 00:07:57.457 "nvme_iov_md": false 00:07:57.457 }, 00:07:57.457 "memory_domains": [ 00:07:57.457 { 00:07:57.457 "dma_device_id": "system", 00:07:57.457 "dma_device_type": 1 00:07:57.457 }, 00:07:57.457 { 00:07:57.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.457 "dma_device_type": 2 00:07:57.457 } 00:07:57.457 ], 00:07:57.457 "driver_specific": {} 00:07:57.457 } 00:07:57.457 ]' 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:57.457 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:58.385 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:58.385 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:58.385 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:58.385 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:58.385 15:44:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:00.280 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:00.537 15:44:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:01.467 15:44:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.397 ************************************ 00:08:02.397 START TEST filesystem_in_capsule_ext4 00:08:02.397 ************************************ 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:02.397 15:44:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:02.397 15:44:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:02.397 mke2fs 1.46.5 (30-Dec-2021) 00:08:02.397 Discarding device blocks: 0/522240 done 00:08:02.397 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:02.397 Filesystem UUID: d435585e-8e9d-405e-8e0e-1226dcc74a58 00:08:02.397 Superblock backups stored on blocks: 00:08:02.397 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:02.397 00:08:02.397 Allocating group tables: 0/64 done 00:08:02.397 Writing inode tables: 0/64 done 00:08:02.654 Creating journal (8192 blocks): done 00:08:02.654 Writing superblocks and filesystem accounting information: 0/64 done 00:08:02.654 00:08:02.654 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:02.654 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:03.584 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4121883 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:03.584 00:08:03.584 real 0m1.155s 00:08:03.584 user 0m0.016s 00:08:03.584 sys 0m0.054s 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:03.584 ************************************ 00:08:03.584 END TEST filesystem_in_capsule_ext4 00:08:03.584 ************************************ 00:08:03.584 
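The in-capsule variant repeats the same ext4/btrfs/xfs cycle; the only target-side difference is the in-capsule data size passed when the TCP transport is created (0 in the first test, 4096 here), so small writes travel inside the NVMe/TCP command capsule instead of being fetched in a separate data transfer. For reference, the RPC bring-up both runs issue through rpc_cmd (a wrapper around SPDK's scripts/rpc.py, pointed at the target's /var/tmp/spdk.sock), shown here with rpc.py directly:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # nvmf_filesystem_no_in_capsule
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # nvmf_filesystem_in_capsule
rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ram disk, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420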
15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.584 ************************************ 00:08:03.584 START TEST filesystem_in_capsule_btrfs 00:08:03.584 ************************************ 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:03.584 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:03.840 btrfs-progs v6.6.2 00:08:03.840 See https://btrfs.readthedocs.io for more information. 00:08:03.840 00:08:03.840 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:03.840 NOTE: several default settings have changed in version 5.15, please make sure 00:08:03.840 this does not affect your deployments: 00:08:03.840 - DUP for metadata (-m dup) 00:08:03.840 - enabled no-holes (-O no-holes) 00:08:03.840 - enabled free-space-tree (-R free-space-tree) 00:08:03.840 00:08:03.840 Label: (null) 00:08:03.840 UUID: eebc1976-4992-47fe-b3a1-2b0b7752066a 00:08:03.840 Node size: 16384 00:08:03.840 Sector size: 4096 00:08:03.840 Filesystem size: 510.00MiB 00:08:03.840 Block group profiles: 00:08:03.840 Data: single 8.00MiB 00:08:03.840 Metadata: DUP 32.00MiB 00:08:03.840 System: DUP 8.00MiB 00:08:03.840 SSD detected: yes 00:08:03.840 Zoned device: no 00:08:03.840 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:03.840 Runtime features: free-space-tree 00:08:03.840 Checksum: crc32c 00:08:03.840 Number of devices: 1 00:08:03.840 Devices: 00:08:03.840 ID SIZE PATH 00:08:03.840 1 510.00MiB /dev/nvme0n1p1 00:08:03.840 00:08:03.840 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:03.840 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4121883 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:04.098 00:08:04.098 real 0m0.638s 00:08:04.098 user 0m0.022s 00:08:04.098 sys 0m0.124s 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:04.098 ************************************ 00:08:04.098 END TEST filesystem_in_capsule_btrfs 00:08:04.098 ************************************ 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.098 ************************************ 00:08:04.098 START TEST filesystem_in_capsule_xfs 00:08:04.098 ************************************ 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:04.098 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:04.356 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:04.356 = sectsz=512 attr=2, projid32bit=1 00:08:04.356 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:04.356 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:04.356 data = bsize=4096 blocks=130560, imaxpct=25 00:08:04.356 = sunit=0 swidth=0 blks 00:08:04.356 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:04.356 log =internal log bsize=4096 blocks=16384, version=2 00:08:04.356 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:04.356 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:05.347 Discarding blocks...Done. 
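The same make_filesystem helper drives the ext4, btrfs, and xfs legs; the sketch below is condensed from the autotest_common.sh trace above and shows only the behaviour visible in this log (the ext4-vs-everything-else force flag), omitting any retry handling the real helper may carry:

  # make_filesystem <fstype> <dev>: pick the right force flag and run mkfs (sketch, not the full helper)
  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F            # mkfs.ext4 spells "force" as -F
      else
          force=-f            # mkfs.btrfs and mkfs.xfs use -f
      fi
      mkfs.$fstype $force $dev_name
  }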
00:08:05.347 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:05.347 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4121883 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.871 00:08:07.871 real 0m3.609s 00:08:07.871 user 0m0.015s 00:08:07.871 sys 0m0.062s 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:07.871 ************************************ 00:08:07.871 END TEST filesystem_in_capsule_xfs 00:08:07.871 ************************************ 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:07.871 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:08.128 15:44:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4121883 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4121883 ']' 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4121883 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4121883 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4121883' 00:08:08.128 killing process with pid 4121883 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 4121883 00:08:08.128 15:44:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 4121883 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:08.694 00:08:08.694 real 0m11.893s 00:08:08.694 user 0m45.377s 00:08:08.694 sys 0m1.834s 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.694 ************************************ 00:08:08.694 END TEST nvmf_filesystem_in_capsule 00:08:08.694 ************************************ 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.694 rmmod nvme_tcp 00:08:08.694 rmmod nvme_fabrics 00:08:08.694 rmmod nvme_keyring 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.694 15:44:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.955 15:44:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.955 15:44:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.860 15:44:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.861 00:08:10.861 real 0m30.358s 00:08:10.861 user 1m39.327s 00:08:10.861 sys 0m5.446s 00:08:10.861 15:44:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.861 15:44:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 ************************************ 00:08:10.861 END TEST nvmf_filesystem 00:08:10.861 ************************************ 00:08:10.861 15:44:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:10.861 15:44:40 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:10.861 15:44:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:10.861 15:44:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.861 15:44:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:10.861 ************************************ 00:08:10.861 START TEST nvmf_target_discovery 00:08:10.861 ************************************ 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:10.861 * Looking for test storage... 
00:08:10.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.861 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.119 15:44:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.653 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.654 15:44:42 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:13.654 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:13.654 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:13.654 Found net devices under 0000:09:00.0: cvl_0_0 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:13.654 Found net devices under 0000:09:00.1: cvl_0_1 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:08:13.654 00:08:13.654 --- 10.0.0.2 ping statistics --- 00:08:13.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.654 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:13.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:08:13.654 00:08:13.654 --- 10.0.0.1 ping statistics --- 00:08:13.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.654 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4125362 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4125362 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 4125362 ']' 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:13.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.654 15:44:42 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.654 [2024-07-12 15:44:43.016724] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:08:13.654 [2024-07-12 15:44:43.016816] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.654 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.654 [2024-07-12 15:44:43.079987] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.654 [2024-07-12 15:44:43.190465] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.654 [2024-07-12 15:44:43.190516] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.654 [2024-07-12 15:44:43.190551] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.654 [2024-07-12 15:44:43.190563] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.654 [2024-07-12 15:44:43.190581] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.654 [2024-07-12 15:44:43.190632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.654 [2024-07-12 15:44:43.190702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.654 [2024-07-12 15:44:43.190768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.654 [2024-07-12 15:44:43.190771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.654 [2024-07-12 15:44:43.351993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.654 Null1 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.654 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 [2024-07-12 15:44:43.392284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 Null2 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:13.912 15:44:43 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 Null3 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 Null4 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.912 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:08:13.912 00:08:13.912 Discovery Log Number of Records 6, Generation counter 6 00:08:13.912 =====Discovery Log Entry 0====== 00:08:13.912 trtype: tcp 00:08:13.912 adrfam: ipv4 00:08:13.912 subtype: current discovery subsystem 00:08:13.912 treq: not required 00:08:13.913 portid: 0 00:08:13.913 trsvcid: 4420 00:08:13.913 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:13.913 traddr: 10.0.0.2 00:08:13.913 eflags: explicit discovery connections, duplicate discovery information 00:08:13.913 sectype: none 00:08:13.913 =====Discovery Log Entry 1====== 00:08:13.913 trtype: tcp 00:08:13.913 adrfam: ipv4 00:08:13.913 subtype: nvme subsystem 00:08:13.913 treq: not required 00:08:13.913 portid: 0 00:08:13.913 trsvcid: 4420 00:08:13.913 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:13.913 traddr: 10.0.0.2 00:08:13.913 eflags: none 00:08:13.913 sectype: none 00:08:13.913 =====Discovery Log Entry 2====== 00:08:13.913 trtype: tcp 00:08:13.913 adrfam: ipv4 00:08:13.913 subtype: nvme subsystem 00:08:13.913 treq: not required 00:08:13.913 portid: 0 00:08:13.913 trsvcid: 4420 00:08:13.913 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:13.913 traddr: 10.0.0.2 00:08:13.913 eflags: none 00:08:13.913 sectype: none 00:08:13.913 =====Discovery Log Entry 3====== 00:08:13.913 trtype: tcp 00:08:13.913 adrfam: ipv4 00:08:13.913 subtype: nvme subsystem 00:08:13.913 treq: not required 00:08:13.913 portid: 0 00:08:13.913 trsvcid: 4420 00:08:13.913 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:13.913 traddr: 10.0.0.2 00:08:13.913 eflags: none 00:08:13.913 sectype: none 00:08:13.913 =====Discovery Log Entry 4====== 00:08:13.913 trtype: tcp 00:08:13.913 adrfam: ipv4 00:08:13.913 subtype: nvme subsystem 00:08:13.913 treq: not required 
00:08:13.913 portid: 0 00:08:13.913 trsvcid: 4420 00:08:13.913 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:13.913 traddr: 10.0.0.2 00:08:13.913 eflags: none 00:08:13.913 sectype: none 00:08:13.913 =====Discovery Log Entry 5====== 00:08:13.913 trtype: tcp 00:08:13.913 adrfam: ipv4 00:08:13.913 subtype: discovery subsystem referral 00:08:13.913 treq: not required 00:08:13.913 portid: 0 00:08:13.913 trsvcid: 4430 00:08:13.913 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:13.913 traddr: 10.0.0.2 00:08:13.913 eflags: none 00:08:13.913 sectype: none 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:13.913 Perform nvmf subsystem discovery via RPC 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.913 [ 00:08:13.913 { 00:08:13.913 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:13.913 "subtype": "Discovery", 00:08:13.913 "listen_addresses": [ 00:08:13.913 { 00:08:13.913 "trtype": "TCP", 00:08:13.913 "adrfam": "IPv4", 00:08:13.913 "traddr": "10.0.0.2", 00:08:13.913 "trsvcid": "4420" 00:08:13.913 } 00:08:13.913 ], 00:08:13.913 "allow_any_host": true, 00:08:13.913 "hosts": [] 00:08:13.913 }, 00:08:13.913 { 00:08:13.913 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:13.913 "subtype": "NVMe", 00:08:13.913 "listen_addresses": [ 00:08:13.913 { 00:08:13.913 "trtype": "TCP", 00:08:13.913 "adrfam": "IPv4", 00:08:13.913 "traddr": "10.0.0.2", 00:08:13.913 "trsvcid": "4420" 00:08:13.913 } 00:08:13.913 ], 00:08:13.913 "allow_any_host": true, 00:08:13.913 "hosts": [], 00:08:13.913 "serial_number": "SPDK00000000000001", 00:08:13.913 "model_number": "SPDK bdev Controller", 00:08:13.913 "max_namespaces": 32, 00:08:13.913 "min_cntlid": 1, 00:08:13.913 "max_cntlid": 65519, 00:08:13.913 "namespaces": [ 00:08:13.913 { 00:08:13.913 "nsid": 1, 00:08:13.913 "bdev_name": "Null1", 00:08:13.913 "name": "Null1", 00:08:13.913 "nguid": "BFC6605D3FEE433F84361BC5BF0D00A9", 00:08:13.913 "uuid": "bfc6605d-3fee-433f-8436-1bc5bf0d00a9" 00:08:13.913 } 00:08:13.913 ] 00:08:13.913 }, 00:08:13.913 { 00:08:13.913 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:13.913 "subtype": "NVMe", 00:08:13.913 "listen_addresses": [ 00:08:13.913 { 00:08:13.913 "trtype": "TCP", 00:08:13.913 "adrfam": "IPv4", 00:08:13.913 "traddr": "10.0.0.2", 00:08:13.913 "trsvcid": "4420" 00:08:13.913 } 00:08:13.913 ], 00:08:13.913 "allow_any_host": true, 00:08:13.913 "hosts": [], 00:08:13.913 "serial_number": "SPDK00000000000002", 00:08:13.913 "model_number": "SPDK bdev Controller", 00:08:13.913 "max_namespaces": 32, 00:08:13.913 "min_cntlid": 1, 00:08:13.913 "max_cntlid": 65519, 00:08:13.913 "namespaces": [ 00:08:13.913 { 00:08:13.913 "nsid": 1, 00:08:13.913 "bdev_name": "Null2", 00:08:13.913 "name": "Null2", 00:08:13.913 "nguid": "3D6F2D322F064A57BE0E61DB995DA2E4", 00:08:13.913 "uuid": "3d6f2d32-2f06-4a57-be0e-61db995da2e4" 00:08:13.913 } 00:08:13.913 ] 00:08:13.913 }, 00:08:13.913 { 00:08:13.913 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:13.913 "subtype": "NVMe", 00:08:13.913 "listen_addresses": [ 00:08:13.913 { 00:08:13.913 "trtype": "TCP", 00:08:13.913 "adrfam": "IPv4", 00:08:13.913 "traddr": "10.0.0.2", 00:08:13.913 "trsvcid": "4420" 00:08:13.913 } 00:08:13.913 ], 00:08:13.913 "allow_any_host": true, 
00:08:13.913 "hosts": [], 00:08:13.913 "serial_number": "SPDK00000000000003", 00:08:13.913 "model_number": "SPDK bdev Controller", 00:08:13.913 "max_namespaces": 32, 00:08:13.913 "min_cntlid": 1, 00:08:13.913 "max_cntlid": 65519, 00:08:13.913 "namespaces": [ 00:08:13.913 { 00:08:13.913 "nsid": 1, 00:08:13.913 "bdev_name": "Null3", 00:08:13.913 "name": "Null3", 00:08:13.913 "nguid": "32B552FA106E4282BB15BE4A54C36474", 00:08:13.913 "uuid": "32b552fa-106e-4282-bb15-be4a54c36474" 00:08:13.913 } 00:08:13.913 ] 00:08:13.913 }, 00:08:13.913 { 00:08:13.913 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:13.913 "subtype": "NVMe", 00:08:13.913 "listen_addresses": [ 00:08:13.913 { 00:08:13.913 "trtype": "TCP", 00:08:13.913 "adrfam": "IPv4", 00:08:13.913 "traddr": "10.0.0.2", 00:08:13.913 "trsvcid": "4420" 00:08:13.913 } 00:08:13.913 ], 00:08:13.913 "allow_any_host": true, 00:08:13.913 "hosts": [], 00:08:13.913 "serial_number": "SPDK00000000000004", 00:08:13.913 "model_number": "SPDK bdev Controller", 00:08:13.913 "max_namespaces": 32, 00:08:13.913 "min_cntlid": 1, 00:08:13.913 "max_cntlid": 65519, 00:08:13.913 "namespaces": [ 00:08:13.913 { 00:08:13.913 "nsid": 1, 00:08:13.913 "bdev_name": "Null4", 00:08:13.913 "name": "Null4", 00:08:13.913 "nguid": "C4A1005EBACC46F49A34BE6EDE57A020", 00:08:13.913 "uuid": "c4a1005e-bacc-46f4-9a34-be6ede57a020" 00:08:13.913 } 00:08:13.913 ] 00:08:13.913 } 00:08:13.913 ] 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.913 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.171 rmmod nvme_tcp 00:08:14.171 rmmod nvme_fabrics 00:08:14.171 rmmod nvme_keyring 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4125362 ']' 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4125362 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 4125362 ']' 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 4125362 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:14.171 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:14.172 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4125362 00:08:14.172 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:14.172 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:14.172 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4125362' 00:08:14.172 killing process with pid 4125362 00:08:14.172 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 4125362 00:08:14.172 15:44:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 4125362 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.431 15:44:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.968 15:44:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.968 00:08:16.968 real 0m5.584s 00:08:16.968 user 0m4.243s 00:08:16.968 sys 0m1.956s 00:08:16.968 15:44:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.968 15:44:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.968 ************************************ 00:08:16.968 END TEST nvmf_target_discovery 00:08:16.968 ************************************ 00:08:16.968 15:44:46 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:16.968 15:44:46 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:16.968 15:44:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.968 15:44:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.968 15:44:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.968 ************************************ 00:08:16.968 START TEST nvmf_referrals 00:08:16.968 ************************************ 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:16.968 * Looking for test storage... 00:08:16.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
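Reference note (not part of the captured run): the three loopback addresses defined here, together with referral port 4430 set immediately below, are the referral targets this test registers against the discovery service. A minimal sketch of the add/list/verify/remove cycle that the trace further down exercises, assuming a running nvmf_tgt with its discovery listener on 10.0.0.2:8009 and SPDK's rpc.py on the default RPC socket:

  rpc=./scripts/rpc.py                                          # assumed SPDK rpc.py location
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  # Target-side check: the RPC should list exactly the three addresses.
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  # Initiator-side check: referrals appear as extra discovery-log records.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
  done

As the later part of the trace shows, a referral may also carry an explicit subsystem NQN via -n: with -n nqn.2016-06.io.spdk:cnode1 it is reported by nvme discover as an "nvme subsystem" record, while -n nqn.2014-08.org.nvmexpress.discovery keeps it a "discovery subsystem referral".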
00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:16.968 15:44:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.969 15:44:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.872 15:44:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.872 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:18.873 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:18.873 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.873 15:44:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:18.873 Found net devices under 0000:09:00.0: cvl_0_0 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:18.873 Found net devices under 0000:09:00.1: cvl_0_1 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.873 15:44:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:08:18.873 00:08:18.873 --- 10.0.0.2 ping statistics --- 00:08:18.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.873 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:18.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:18.873 00:08:18.873 --- 10.0.0.1 ping statistics --- 00:08:18.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.873 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4127461 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4127461 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 4127461 ']' 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:18.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.873 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:18.873 [2024-07-12 15:44:48.573744] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:08:18.873 [2024-07-12 15:44:48.573828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.131 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.131 [2024-07-12 15:44:48.632854] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.131 [2024-07-12 15:44:48.736255] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.131 [2024-07-12 15:44:48.736303] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.131 [2024-07-12 15:44:48.736338] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:19.131 [2024-07-12 15:44:48.736350] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:19.131 [2024-07-12 15:44:48.736360] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.131 [2024-07-12 15:44:48.736449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.131 [2024-07-12 15:44:48.736530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.131 [2024-07-12 15:44:48.736598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.131 [2024-07-12 15:44:48.736600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 [2024-07-12 15:44:48.884250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 [2024-07-12 15:44:48.896495] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.389 15:44:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.389 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.647 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:19.905 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:20.163 15:44:49 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.163 15:44:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:20.421 15:44:50 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.421 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:20.679 
15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:20.679 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.937 rmmod nvme_tcp 00:08:20.937 rmmod nvme_fabrics 00:08:20.937 rmmod nvme_keyring 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4127461 ']' 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4127461 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 4127461 ']' 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 4127461 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4127461 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4127461' 00:08:20.937 killing process with pid 4127461 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 4127461 00:08:20.937 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 4127461 00:08:21.200 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.201 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.201 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.201 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.201 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.201 15:44:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.201 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.201 15:44:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.140 15:44:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.140 00:08:23.140 real 0m6.650s 00:08:23.140 user 0m9.346s 00:08:23.140 sys 0m2.198s 00:08:23.140 15:44:52 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.140 15:44:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.140 ************************************ 00:08:23.140 END TEST nvmf_referrals 00:08:23.140 ************************************ 00:08:23.140 15:44:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:23.140 15:44:52 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:23.140 15:44:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.140 15:44:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.140 15:44:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:23.140 ************************************ 00:08:23.140 START TEST nvmf_connect_disconnect 00:08:23.140 ************************************ 00:08:23.140 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:23.399 * Looking for test storage... 00:08:23.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.399 15:44:52 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.399 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.400 15:44:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.298 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:25.299 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:25.299 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.299 15:44:54 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:25.299 Found net devices under 0000:09:00.0: cvl_0_0 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:25.299 Found net devices under 0000:09:00.1: cvl_0_1 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.299 15:44:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.299 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.299 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.299 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:25.557 00:08:25.557 --- 10.0.0.2 ping statistics --- 00:08:25.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.557 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:25.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:08:25.557 00:08:25.557 --- 10.0.0.1 ping statistics --- 00:08:25.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.557 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4129752 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4129752 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 4129752 ']' 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.557 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:25.557 [2024-07-12 15:44:55.218898] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
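The records above come from nvmf_tcp_init in nvmf/common.sh: one ice port is moved into a private namespace to act as the target side, the other stays in the root namespace as the initiator, and a ping in each direction verifies the link before the target app is started. Condensed into a standalone sketch (interface, namespace and address names are exactly the ones printed in the log):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1            # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the first ice port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP (port 4420) through
  ping -c 1 10.0.0.2                                            # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator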
00:08:25.557 [2024-07-12 15:44:55.218977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.557 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.814 [2024-07-12 15:44:55.285614] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.814 [2024-07-12 15:44:55.398014] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.814 [2024-07-12 15:44:55.398067] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.814 [2024-07-12 15:44:55.398081] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.814 [2024-07-12 15:44:55.398092] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.814 [2024-07-12 15:44:55.398101] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.814 [2024-07-12 15:44:55.398150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.814 [2024-07-12 15:44:55.398203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.814 [2024-07-12 15:44:55.398272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.814 [2024-07-12 15:44:55.398275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.814 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.814 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:25.814 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.814 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.814 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.072 [2024-07-12 15:44:55.555143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:26.072 15:44:55 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:26.072 [2024-07-12 15:44:55.606520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:26.072 15:44:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:29.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.197 rmmod nvme_tcp 00:08:40.197 rmmod nvme_fabrics 00:08:40.197 rmmod nvme_keyring 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4129752 ']' 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4129752 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 4129752 ']' 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 4129752 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4129752 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4129752' 00:08:40.197 killing process with pid 4129752 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 4129752 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 4129752 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.197 15:45:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.108 15:45:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.108 00:08:42.108 real 0m18.810s 00:08:42.108 user 0m56.236s 00:08:42.108 sys 0m3.384s 00:08:42.108 15:45:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.108 15:45:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:42.108 ************************************ 00:08:42.108 END TEST nvmf_connect_disconnect 00:08:42.108 ************************************ 00:08:42.108 15:45:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:42.108 15:45:11 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:42.108 15:45:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.108 15:45:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.108 15:45:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.108 ************************************ 00:08:42.108 START TEST nvmf_multitarget 00:08:42.108 ************************************ 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:42.108 * Looking for test storage... 
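Stripped of xtrace noise, the nvmf_connect_disconnect run that finishes above boils down to a short bring-up and teardown around the connect/disconnect loop (the loop itself lives in connect_disconnect.sh and only surfaces here as the five "disconnected 1 controller(s)" lines). rpc_cmd and killprocess are the autotest helpers echoed in the log; the arguments are verbatim from the records above:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                                # creates bdev Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # ... five connect/disconnect iterations against nqn.2016-06.io.spdk:cnode1 ...
  # teardown (nvmftestfini)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  killprocess "$nvmfpid"            # pid 4129752 in this run
  ip -4 addr flush cvl_0_1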
00:08:42.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.108 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.109 15:45:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:44.645 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:44.645 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:44.645 Found net devices under 0000:09:00.0: cvl_0_0 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:44.645 Found net devices under 0000:09:00.1: cvl_0_1 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.645 15:45:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.645 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:08:44.645 00:08:44.645 --- 10.0.0.2 ping statistics --- 00:08:44.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.645 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:08:44.645 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:08:44.645 00:08:44.645 --- 10.0.0.1 ping statistics --- 00:08:44.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.645 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:44.645 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.645 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:44.645 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.645 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.645 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4134026 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4134026 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 4134026 ']' 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.646 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:44.646 [2024-07-12 15:45:14.095020] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
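nvmfappstart, echoed just above, launches the target inside the test namespace and blocks until its RPC socket answers. A minimal sketch of that pattern as it appears in the log; backgrounding with '&' and taking the pid from '$!' are assumptions about nvmf/common.sh internals, while the command line and the waitforlisten call are verbatim:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                 # 4134026 in this run (assumed to come from $!)
  waitforlisten "$nvmfpid"   # waits for the app to listen on /var/tmp/spdk.sock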
00:08:44.646 [2024-07-12 15:45:14.095095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.646 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.646 [2024-07-12 15:45:14.161970] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.646 [2024-07-12 15:45:14.273740] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.646 [2024-07-12 15:45:14.273793] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.646 [2024-07-12 15:45:14.273807] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.646 [2024-07-12 15:45:14.273818] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.646 [2024-07-12 15:45:14.273828] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.646 [2024-07-12 15:45:14.273887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.646 [2024-07-12 15:45:14.273948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.646 [2024-07-12 15:45:14.274011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.646 [2024-07-12 15:45:14.274014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:44.904 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:44.904 "nvmf_tgt_1" 00:08:45.162 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:45.162 "nvmf_tgt_2" 00:08:45.162 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.162 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:45.162 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:45.162 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:45.420 true 00:08:45.420 15:45:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:45.420 true 00:08:45.420 15:45:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:45.420 15:45:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.679 rmmod nvme_tcp 00:08:45.679 rmmod nvme_fabrics 00:08:45.679 rmmod nvme_keyring 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4134026 ']' 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4134026 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 4134026 ']' 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 4134026 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4134026 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4134026' 00:08:45.679 killing process with pid 4134026 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 4134026 00:08:45.679 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 4134026 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.937 15:45:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.474 15:45:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.474 00:08:48.474 real 0m5.882s 00:08:48.474 user 0m6.505s 00:08:48.474 sys 0m1.970s 00:08:48.474 15:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.474 15:45:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:48.474 ************************************ 00:08:48.474 END TEST nvmf_multitarget 00:08:48.474 ************************************ 00:08:48.474 15:45:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:48.474 15:45:17 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:48.474 15:45:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.474 15:45:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.474 15:45:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:48.474 ************************************ 00:08:48.474 START TEST nvmf_rpc 00:08:48.474 ************************************ 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:48.474 * Looking for test storage... 
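For reference, the nvmf_multitarget run above reduces to the following RPC sequence; this is a minimal sketch, assuming a running nvmf_tgt and the multitarget_rpc.py helper from the SPDK tree checked out in this workspace (path shortened here for readability).

RPC=./spdk/test/nvmf/target/multitarget_rpc.py
# only the default target exists at the start of the test
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ] || exit 1
# create two extra targets with the same -s 32 argument used in the run above
$RPC nvmf_create_target -n nvmf_tgt_1 -s 32
$RPC nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($RPC nvmf_get_targets | jq length)" -eq 3 ] || exit 1
# delete them again and verify only the default target remains
$RPC nvmf_delete_target -n nvmf_tgt_1
$RPC nvmf_delete_target -n nvmf_tgt_2
[ "$($RPC nvmf_get_targets | jq length)" -eq 1 ] || exit 1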
00:08:48.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.474 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.475 15:45:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
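The common.sh setup above also derives the host identity (NVME_HOSTNQN / NVME_HOSTID) that every later nvme connect in this log reuses. A minimal sketch of that pattern, assuming nvme-cli is installed; the parameter expansion used to extract the UUID is illustrative, not the exact common.sh code.

# generate a host NQN once and keep the embedded UUID as the host ID
HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*uuid:}            # strip the fixed prefix, leaving just the UUID
# later connect attempts pass both values, as seen further down in this log
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420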
00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:50.377 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:50.377 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:50.377 Found net devices under 0000:09:00.0: cvl_0_0 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:50.377 Found net devices under 0000:09:00.1: cvl_0_1 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:08:50.377 00:08:50.377 --- 10.0.0.2 ping statistics --- 00:08:50.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.377 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:50.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:50.377 00:08:50.377 --- 10.0.0.1 ping statistics --- 00:08:50.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.377 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4136239 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4136239 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 4136239 ']' 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.377 15:45:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.377 [2024-07-12 15:45:20.000386] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:08:50.377 [2024-07-12 15:45:20.000505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.377 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.377 [2024-07-12 15:45:20.070033] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.635 [2024-07-12 15:45:20.177032] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.635 [2024-07-12 15:45:20.177098] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:50.635 [2024-07-12 15:45:20.177120] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.635 [2024-07-12 15:45:20.177131] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.635 [2024-07-12 15:45:20.177141] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.635 [2024-07-12 15:45:20.177283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.635 [2024-07-12 15:45:20.177355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.635 [2024-07-12 15:45:20.177420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.635 [2024-07-12 15:45:20.177422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:50.635 "tick_rate": 2700000000, 00:08:50.635 "poll_groups": [ 00:08:50.635 { 00:08:50.635 "name": "nvmf_tgt_poll_group_000", 00:08:50.635 "admin_qpairs": 0, 00:08:50.635 "io_qpairs": 0, 00:08:50.635 "current_admin_qpairs": 0, 00:08:50.635 "current_io_qpairs": 0, 00:08:50.635 "pending_bdev_io": 0, 00:08:50.635 "completed_nvme_io": 0, 00:08:50.635 "transports": [] 00:08:50.635 }, 00:08:50.635 { 00:08:50.635 "name": "nvmf_tgt_poll_group_001", 00:08:50.635 "admin_qpairs": 0, 00:08:50.635 "io_qpairs": 0, 00:08:50.635 "current_admin_qpairs": 0, 00:08:50.635 "current_io_qpairs": 0, 00:08:50.635 "pending_bdev_io": 0, 00:08:50.635 "completed_nvme_io": 0, 00:08:50.635 "transports": [] 00:08:50.635 }, 00:08:50.635 { 00:08:50.635 "name": "nvmf_tgt_poll_group_002", 00:08:50.635 "admin_qpairs": 0, 00:08:50.635 "io_qpairs": 0, 00:08:50.635 "current_admin_qpairs": 0, 00:08:50.635 "current_io_qpairs": 0, 00:08:50.635 "pending_bdev_io": 0, 00:08:50.635 "completed_nvme_io": 0, 00:08:50.635 "transports": [] 00:08:50.635 }, 00:08:50.635 { 00:08:50.635 "name": "nvmf_tgt_poll_group_003", 00:08:50.635 "admin_qpairs": 0, 00:08:50.635 "io_qpairs": 0, 00:08:50.635 "current_admin_qpairs": 0, 00:08:50.635 "current_io_qpairs": 0, 00:08:50.635 "pending_bdev_io": 0, 00:08:50.635 "completed_nvme_io": 0, 00:08:50.635 "transports": [] 00:08:50.635 } 00:08:50.635 ] 00:08:50.635 }' 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:50.635 15:45:20 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.893 [2024-07-12 15:45:20.425472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:50.893 "tick_rate": 2700000000, 00:08:50.893 "poll_groups": [ 00:08:50.893 { 00:08:50.893 "name": "nvmf_tgt_poll_group_000", 00:08:50.893 "admin_qpairs": 0, 00:08:50.893 "io_qpairs": 0, 00:08:50.893 "current_admin_qpairs": 0, 00:08:50.893 "current_io_qpairs": 0, 00:08:50.893 "pending_bdev_io": 0, 00:08:50.893 "completed_nvme_io": 0, 00:08:50.893 "transports": [ 00:08:50.893 { 00:08:50.893 "trtype": "TCP" 00:08:50.893 } 00:08:50.893 ] 00:08:50.893 }, 00:08:50.893 { 00:08:50.893 "name": "nvmf_tgt_poll_group_001", 00:08:50.893 "admin_qpairs": 0, 00:08:50.893 "io_qpairs": 0, 00:08:50.893 "current_admin_qpairs": 0, 00:08:50.893 "current_io_qpairs": 0, 00:08:50.893 "pending_bdev_io": 0, 00:08:50.893 "completed_nvme_io": 0, 00:08:50.893 "transports": [ 00:08:50.893 { 00:08:50.893 "trtype": "TCP" 00:08:50.893 } 00:08:50.893 ] 00:08:50.893 }, 00:08:50.893 { 00:08:50.893 "name": "nvmf_tgt_poll_group_002", 00:08:50.893 "admin_qpairs": 0, 00:08:50.893 "io_qpairs": 0, 00:08:50.893 "current_admin_qpairs": 0, 00:08:50.893 "current_io_qpairs": 0, 00:08:50.893 "pending_bdev_io": 0, 00:08:50.893 "completed_nvme_io": 0, 00:08:50.893 "transports": [ 00:08:50.893 { 00:08:50.893 "trtype": "TCP" 00:08:50.893 } 00:08:50.893 ] 00:08:50.893 }, 00:08:50.893 { 00:08:50.893 "name": "nvmf_tgt_poll_group_003", 00:08:50.893 "admin_qpairs": 0, 00:08:50.893 "io_qpairs": 0, 00:08:50.893 "current_admin_qpairs": 0, 00:08:50.893 "current_io_qpairs": 0, 00:08:50.893 "pending_bdev_io": 0, 00:08:50.893 "completed_nvme_io": 0, 00:08:50.893 "transports": [ 00:08:50.893 { 00:08:50.893 "trtype": "TCP" 00:08:50.893 } 00:08:50.893 ] 00:08:50.893 } 00:08:50.893 ] 00:08:50.893 }' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
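The jcount/jsum checks above are thin jq wrappers used to assert on the nvmf_get_stats output; a minimal sketch of the same idea, assuming the scripts/rpc.py client from this workspace and the stats JSON shape shown in this log.

# count poll groups and sum a numeric field across them, as rpc.sh's jcount/jsum helpers do
stats=$(./spdk/scripts/rpc.py nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l                            # expect 4, one per reactor core
echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}' # expect 0 before any host connects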
00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.893 Malloc1 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.893 [2024-07-12 15:45:20.578946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:50.893 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:08:50.893 [2024-07-12 15:45:20.601533] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:08:51.150 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:51.150 could not add new controller: failed to write to nvme-fabrics device 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.150 15:45:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.714 15:45:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.714 15:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:51.714 15:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.714 15:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:51.714 15:45:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:53.609 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:53.609 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:53.609 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.609 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:53.609 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.609 15:45:23 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:53.609 15:45:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.866 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:53.867 [2024-07-12 15:45:23.461441] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:08:53.867 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:53.867 could not add new controller: failed to write to nvme-fabrics device 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.867 15:45:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.800 15:45:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.800 15:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.800 15:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.800 15:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:54.800 15:45:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:56.697 15:45:26 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.697 [2024-07-12 15:45:26.281410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.697 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:57.263 15:45:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:57.263 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:57.263 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:57.263 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:57.263 15:45:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:59.854 15:45:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:59.854 15:45:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:59.854 15:45:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.854 [2024-07-12 15:45:29.186137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:59.854 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.855 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.855 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.855 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.855 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.855 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.855 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.855 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.112 15:45:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.112 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:00.112 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.112 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:00.112 15:45:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:02.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.640 [2024-07-12 15:45:31.973175] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.640 15:45:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.205 15:45:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.205 15:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:03.205 15:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.205 15:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:03.205 15:45:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.101 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 [2024-07-12 15:45:34.830620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.358 15:45:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.923 15:45:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.923 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:05.923 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.923 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:05.923 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:07.820 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:07.820 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:07.820 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.820 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:07.820 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.820 
15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:07.820 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:08.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.078 [2024-07-12 15:45:37.609860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.078 15:45:37 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.078 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:08.643 15:45:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.643 15:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:08.643 15:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.643 15:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:08.643 15:45:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 [2024-07-12 15:45:40.427997] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 [2024-07-12 15:45:40.476084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 [2024-07-12 15:45:40.524249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.169 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 [2024-07-12 15:45:40.572430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
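[Note] The passes above and below repeat the two loops in target/rpc.sh: the first (lines 81-94) also connects a host over NVMe/TCP and waits for the namespace's serial to show up before disconnecting, while the second (lines 99-107) only exercises the RPCs. A minimal stand-alone sketch of one pass of the first loop, assuming a running nvmf_tgt with a Malloc1 bdev and calling rpc.py and nvme-cli directly rather than through the test's rpc_cmd/waitforserial helpers:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME        # subsystem with a fixed serial
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # expose it over NVMe/TCP
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                   # attach Malloc1 as namespace 5
  $rpc nvmf_subsystem_allow_any_host "$nqn"                        # no host whitelisting
  nvme connect --hostnqn="$hostnqn" --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
      -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
  # poll until a block device with the subsystem's serial appears (what waitforserial does)
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 1; done
  nvme disconnect -n "$nqn"
  $rpc nvmf_subsystem_remove_ns "$nqn" 5
  $rpc nvmf_delete_subsystem "$nqn"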
00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 [2024-07-12 15:45:40.620573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:11.170 "tick_rate": 2700000000, 00:09:11.170 "poll_groups": [ 00:09:11.170 { 00:09:11.170 "name": "nvmf_tgt_poll_group_000", 00:09:11.170 "admin_qpairs": 2, 00:09:11.170 "io_qpairs": 84, 00:09:11.170 "current_admin_qpairs": 0, 00:09:11.170 "current_io_qpairs": 0, 00:09:11.170 "pending_bdev_io": 0, 00:09:11.170 "completed_nvme_io": 187, 00:09:11.170 "transports": [ 00:09:11.170 { 00:09:11.170 "trtype": "TCP" 00:09:11.170 } 00:09:11.170 ] 00:09:11.170 }, 00:09:11.170 { 00:09:11.170 "name": "nvmf_tgt_poll_group_001", 00:09:11.170 "admin_qpairs": 2, 00:09:11.170 "io_qpairs": 84, 00:09:11.170 "current_admin_qpairs": 0, 00:09:11.170 "current_io_qpairs": 0, 00:09:11.170 "pending_bdev_io": 0, 00:09:11.170 "completed_nvme_io": 181, 00:09:11.170 "transports": [ 00:09:11.170 { 00:09:11.170 "trtype": "TCP" 00:09:11.170 } 00:09:11.170 ] 00:09:11.170 }, 00:09:11.170 { 00:09:11.170 
"name": "nvmf_tgt_poll_group_002", 00:09:11.170 "admin_qpairs": 1, 00:09:11.170 "io_qpairs": 84, 00:09:11.170 "current_admin_qpairs": 0, 00:09:11.170 "current_io_qpairs": 0, 00:09:11.170 "pending_bdev_io": 0, 00:09:11.170 "completed_nvme_io": 135, 00:09:11.170 "transports": [ 00:09:11.170 { 00:09:11.170 "trtype": "TCP" 00:09:11.170 } 00:09:11.170 ] 00:09:11.170 }, 00:09:11.170 { 00:09:11.170 "name": "nvmf_tgt_poll_group_003", 00:09:11.170 "admin_qpairs": 2, 00:09:11.170 "io_qpairs": 84, 00:09:11.170 "current_admin_qpairs": 0, 00:09:11.170 "current_io_qpairs": 0, 00:09:11.170 "pending_bdev_io": 0, 00:09:11.170 "completed_nvme_io": 183, 00:09:11.170 "transports": [ 00:09:11.170 { 00:09:11.170 "trtype": "TCP" 00:09:11.170 } 00:09:11.170 ] 00:09:11.170 } 00:09:11.170 ] 00:09:11.170 }' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.170 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.170 rmmod nvme_tcp 00:09:11.171 rmmod nvme_fabrics 00:09:11.171 rmmod nvme_keyring 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4136239 ']' 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4136239 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 4136239 ']' 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 4136239 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4136239 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4136239' 00:09:11.171 killing process with pid 4136239 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 4136239 00:09:11.171 15:45:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 4136239 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.429 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.965 15:45:43 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.965 00:09:13.965 real 0m25.529s 00:09:13.965 user 1m22.848s 00:09:13.965 sys 0m4.177s 00:09:13.965 15:45:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.965 15:45:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:13.965 ************************************ 00:09:13.965 END TEST nvmf_rpc 00:09:13.965 ************************************ 00:09:13.965 15:45:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.965 15:45:43 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:13.965 15:45:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.965 15:45:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.965 15:45:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.965 ************************************ 00:09:13.965 START TEST nvmf_invalid 00:09:13.965 ************************************ 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:13.965 * Looking for test storage... 
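[Note] Before tearing down, the nvmf_rpc test above dumps nvmf_get_stats and checks that the per-poll-group qpair counters sum to a non-zero total via its jsum helper (jq piped into awk). A minimal sketch of that aggregation, assuming a running target reachable at the default RPC socket and calling rpc.py directly instead of the rpc_cmd wrapper:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  stats=$($rpc nvmf_get_stats)
  # jsum '.poll_groups[].io_qpairs': one value per poll group, summed by awk
  echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'
  # the same total with jq alone (assumption: jq's add builtin is available)
  echo "$stats" | jq '[.poll_groups[].io_qpairs] | add'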
00:09:13.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.965 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.966 15:45:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:15.865 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:15.865 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:15.865 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:15.866 Found net devices under 0000:09:00.0: cvl_0_0 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:15.866 Found net devices under 0000:09:00.1: cvl_0_1 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:15.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:15.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:09:15.866 00:09:15.866 --- 10.0.0.2 ping statistics --- 00:09:15.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.866 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:15.866 00:09:15.866 --- 10.0.0.1 ping statistics --- 00:09:15.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.866 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=4140747 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4140747 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 4140747 ']' 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.866 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.866 [2024-07-12 15:45:45.577755] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
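[Note] The environment setup above (nvmf/common.sh, nvmf_tcp_init) moves one of the two E810 ports into a private network namespace so target and initiator can talk over real NICs on a single machine. A minimal sketch of that plumbing, assuming the ports have already been enumerated as cvl_0_0 and cvl_0_1 as shown:

  ip netns add cvl_0_0_ns_spdk                      # target side gets its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The target is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt ...), which is why the nvmfappstart line above prefixes the binary with the netns command.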
00:09:15.866 [2024-07-12 15:45:45.577821] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.124 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.124 [2024-07-12 15:45:45.638832] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.124 [2024-07-12 15:45:45.741427] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.124 [2024-07-12 15:45:45.741478] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.124 [2024-07-12 15:45:45.741497] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.124 [2024-07-12 15:45:45.741508] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.124 [2024-07-12 15:45:45.741517] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.124 [2024-07-12 15:45:45.741603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.124 [2024-07-12 15:45:45.741668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.124 [2024-07-12 15:45:45.741731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.124 [2024-07-12 15:45:45.741734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:16.382 15:45:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13719 00:09:16.640 [2024-07-12 15:45:46.172933] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:16.640 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:16.640 { 00:09:16.640 "nqn": "nqn.2016-06.io.spdk:cnode13719", 00:09:16.640 "tgt_name": "foobar", 00:09:16.640 "method": "nvmf_create_subsystem", 00:09:16.640 "req_id": 1 00:09:16.640 } 00:09:16.640 Got JSON-RPC error response 00:09:16.640 response: 00:09:16.640 { 00:09:16.640 "code": -32603, 00:09:16.640 "message": "Unable to find target foobar" 00:09:16.640 }' 00:09:16.640 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:16.640 { 00:09:16.640 "nqn": "nqn.2016-06.io.spdk:cnode13719", 00:09:16.640 "tgt_name": "foobar", 00:09:16.640 "method": "nvmf_create_subsystem", 00:09:16.640 "req_id": 1 00:09:16.640 } 00:09:16.640 Got JSON-RPC error response 00:09:16.640 response: 00:09:16.640 { 00:09:16.640 "code": -32603, 00:09:16.640 "message": "Unable to find target foobar" 
00:09:16.640 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:16.640 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:16.640 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24076 00:09:16.898 [2024-07-12 15:45:46.465947] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24076: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:16.898 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:16.898 { 00:09:16.898 "nqn": "nqn.2016-06.io.spdk:cnode24076", 00:09:16.898 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:16.898 "method": "nvmf_create_subsystem", 00:09:16.898 "req_id": 1 00:09:16.898 } 00:09:16.898 Got JSON-RPC error response 00:09:16.898 response: 00:09:16.898 { 00:09:16.898 "code": -32602, 00:09:16.898 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:16.898 }' 00:09:16.898 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:16.898 { 00:09:16.898 "nqn": "nqn.2016-06.io.spdk:cnode24076", 00:09:16.898 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:16.898 "method": "nvmf_create_subsystem", 00:09:16.898 "req_id": 1 00:09:16.898 } 00:09:16.898 Got JSON-RPC error response 00:09:16.898 response: 00:09:16.898 { 00:09:16.898 "code": -32602, 00:09:16.898 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:16.898 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:16.898 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:16.898 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13962 00:09:17.157 [2024-07-12 15:45:46.726790] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13962: invalid model number 'SPDK_Controller' 00:09:17.157 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:17.157 { 00:09:17.157 "nqn": "nqn.2016-06.io.spdk:cnode13962", 00:09:17.157 "model_number": "SPDK_Controller\u001f", 00:09:17.157 "method": "nvmf_create_subsystem", 00:09:17.157 "req_id": 1 00:09:17.157 } 00:09:17.157 Got JSON-RPC error response 00:09:17.157 response: 00:09:17.157 { 00:09:17.157 "code": -32602, 00:09:17.157 "message": "Invalid MN SPDK_Controller\u001f" 00:09:17.157 }' 00:09:17.157 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:17.157 { 00:09:17.157 "nqn": "nqn.2016-06.io.spdk:cnode13962", 00:09:17.157 "model_number": "SPDK_Controller\u001f", 00:09:17.157 "method": "nvmf_create_subsystem", 00:09:17.157 "req_id": 1 00:09:17.157 } 00:09:17.157 Got JSON-RPC error response 00:09:17.158 response: 00:09:17.158 { 00:09:17.158 "code": -32602, 00:09:17.158 "message": "Invalid MN SPDK_Controller\u001f" 00:09:17.158 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 
15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 
15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]] 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'pn_@]9'\''`_\Y55P`6E^&&$' 00:09:17.158 15:45:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'pn_@]9'\''`_\Y55P`6E^&&$' nqn.2016-06.io.spdk:cnode12901 00:09:17.455 [2024-07-12 15:45:47.019832] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12901: invalid serial number 'pn_@]9'`_\Y55P`6E^&&$' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:17.455 { 00:09:17.455 "nqn": "nqn.2016-06.io.spdk:cnode12901", 00:09:17.455 "serial_number": "pn_@]9'\''`_\\Y55P`6E^&&$", 00:09:17.455 "method": "nvmf_create_subsystem", 00:09:17.455 "req_id": 1 00:09:17.455 } 00:09:17.455 Got JSON-RPC error response 00:09:17.455 response: 
00:09:17.455 { 00:09:17.455 "code": -32602, 00:09:17.455 "message": "Invalid SN pn_@]9'\''`_\\Y55P`6E^&&$" 00:09:17.455 }' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:17.455 { 00:09:17.455 "nqn": "nqn.2016-06.io.spdk:cnode12901", 00:09:17.455 "serial_number": "pn_@]9'`_\\Y55P`6E^&&$", 00:09:17.455 "method": "nvmf_create_subsystem", 00:09:17.455 "req_id": 1 00:09:17.455 } 00:09:17.455 Got JSON-RPC error response 00:09:17.455 response: 00:09:17.455 { 00:09:17.455 "code": -32602, 00:09:17.455 "message": "Invalid SN pn_@]9'`_\\Y55P`6E^&&$" 00:09:17.455 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
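The invalid-serial-number check that completes just above follows the same negative-test pattern as the earlier foobar-target and model-number cases: call rpc.py with a deliberately bad argument, capture the JSON-RPC error text, and glob-match the expected message. A minimal standalone sketch of that pattern, reusing the rpc.py path, arguments and error message visible in this trace; the 2>&1 redirection and the trailing || true are assumptions about how the output is captured, not lifted from target/invalid.sh:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Ask for a transport target that does not exist; the call is expected to fail.
  out=$("$RPC" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13719 2>&1) || true
  # Assert on the JSON-RPC error message, as the [[ ... == *"Unable to find target"* ]] test above does.
  [[ $out == *"Unable to find target"* ]] || { echo "unexpected output: $out"; exit 1; }

The -32603 code seen above is the generic JSON-RPC internal-error code, while the malformed serial-number and model-number cases return -32602 (invalid params).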
00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
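The repetitive per-character trace surrounding this point is target/invalid.sh's gen_random_s helper assembling a 41-character random serial number (the earlier 21-character string was built the same way). A condensed sketch of what the loop does, reconstructed from the trace; the real helper differs in detail, for instance the [[ ... == \- ]] check visible in the trace guards against strings that begin with '-':

  gen_random_s_sketch() {
      local length=$1 ll string=
      local chars=($(seq 32 127))                      # printable ASCII codes, as in the chars=() array above
      for (( ll = 0; ll < length; ll++ )); do
          local code=${chars[RANDOM % ${#chars[@]}]}   # pick one code at random
          string+=$(echo -e "\\x$(printf %x "$code")") # printf %x then echo -e, the same steps traced above
      done
      echo "$string"
  }

Each iteration contributes one character, so 41 iterations of several xtrace lines apiece account for the length of this stretch of the log.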
00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:17.455 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
46 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.456 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x41' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.713 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:09:17.714 15:45:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Z.=[~Nt$= /dev/null' 00:09:20.283 15:45:49 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.821 15:45:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.821 00:09:22.821 real 0m8.708s 00:09:22.821 user 0m20.060s 00:09:22.821 sys 0m2.442s 00:09:22.821 15:45:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.821 15:45:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:22.821 
************************************ 00:09:22.821 END TEST nvmf_invalid 00:09:22.821 ************************************ 00:09:22.822 15:45:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:22.822 15:45:51 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:22.822 15:45:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.822 15:45:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.822 15:45:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.822 ************************************ 00:09:22.822 START TEST nvmf_abort 00:09:22.822 ************************************ 00:09:22.822 15:45:51 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:22.822 * Looking for test storage... 00:09:22.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.822 15:45:52 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.822 15:45:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.739 
15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:24.739 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:24.739 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:24.739 Found net devices under 0000:09:00.0: cvl_0_0 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.739 15:45:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:24.739 Found net devices under 0000:09:00.1: cvl_0_1 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.739 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:24.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:09:24.739 00:09:24.739 --- 10.0.0.2 ping statistics --- 00:09:24.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.740 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:09:24.740 00:09:24.740 --- 10.0.0.1 ping statistics --- 00:09:24.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.740 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4143331 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4143331 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 4143331 ']' 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.740 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.740 [2024-07-12 15:45:54.215753] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
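The nvmf_tcp_init sequence traced above splits the two ice-driver ports discovered earlier (cvl_0_0 and cvl_0_1) between network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), and the two pings then verify reachability in both directions. A condensed sketch of those commands, using the interface names and addresses exactly as traced (address flushes and error handling omitted):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"                                   # namespace that will host the target-side port
  ip link set cvl_0_0 netns "$NS"
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on port 4420
  ping -c 1 10.0.0.2                                   # root namespace -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> root namespace

The nvmf_tgt process started next therefore runs under ip netns exec cvl_0_0_ns_spdk, as the -i 0 -e 0xFFFF -m 0xE command line above shows.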
00:09:24.740 [2024-07-12 15:45:54.215830] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.740 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.740 [2024-07-12 15:45:54.278589] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.740 [2024-07-12 15:45:54.388179] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.740 [2024-07-12 15:45:54.388248] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.740 [2024-07-12 15:45:54.388261] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.740 [2024-07-12 15:45:54.388273] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.740 [2024-07-12 15:45:54.388282] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.740 [2024-07-12 15:45:54.388441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.740 [2024-07-12 15:45:54.390336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.740 [2024-07-12 15:45:54.390349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 [2024-07-12 15:45:54.539687] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 Malloc0 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 Delay0 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 [2024-07-12 15:45:54.606194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.998 15:45:54 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:24.998 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.256 [2024-07-12 15:45:54.752498] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:27.785 Initializing NVMe Controllers 00:09:27.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:27.785 controller IO queue size 128 less than required 00:09:27.785 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:27.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:27.785 Initialization complete. Launching workers. 
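The rpc_cmd calls traced above stand up the abort-test target: a TCP transport, a 64 MiB Malloc0 bdev with 4096-byte blocks wrapped in a Delay0 delay bdev, and subsystem nqn.2016-06.io.spdk:cnode0 (serial SPDK0, any host allowed) exposing Delay0 behind a 10.0.0.2:4420 listener plus a discovery listener. Outside the test harness the same sequence is plain scripts/rpc.py invocations; a sketch, assuming rpc.py's default /var/tmp/spdk.sock socket (the same rpc_addr waitforlisten polls above):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  "$RPC" nvmf_create_transport -t tcp -o -u 8192 -a 256
  "$RPC" bdev_malloc_create 64 4096 -b Malloc0
  "$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The abort example launched just above (build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128) then drives queue-depth-128 I/O against that subsystem and submits aborts for it; its summary counters follow below.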
00:09:27.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27997 00:09:27.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28058, failed to submit 62 00:09:27.785 success 28001, unsuccess 57, failed 0 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:27.785 15:45:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:27.785 rmmod nvme_tcp 00:09:27.785 rmmod nvme_fabrics 00:09:27.785 rmmod nvme_keyring 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4143331 ']' 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4143331 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 4143331 ']' 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 4143331 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4143331 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4143331' 00:09:27.785 killing process with pid 4143331 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 4143331 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 4143331 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:27.785 15:45:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.694 15:45:59 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:29.694 00:09:29.694 real 0m7.411s 00:09:29.694 user 0m10.903s 00:09:29.694 sys 0m2.617s 00:09:29.694 15:45:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:29.694 15:45:59 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:29.694 ************************************ 00:09:29.694 END TEST nvmf_abort 00:09:29.694 ************************************ 00:09:29.694 15:45:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:29.694 15:45:59 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:29.694 15:45:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:29.694 15:45:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.694 15:45:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:29.953 ************************************ 00:09:29.953 START TEST nvmf_ns_hotplug_stress 00:09:29.953 ************************************ 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:29.953 * Looking for test storage... 00:09:29.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.953 15:45:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.953 15:45:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.953 15:45:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.856 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:31.857 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:31.857 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.857 15:46:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:31.857 Found net devices under 0000:09:00.0: cvl_0_0 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:31.857 Found net devices under 0000:09:00.1: cvl_0_1 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.857 15:46:01 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.857 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:32.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:09:32.115 00:09:32.115 --- 10.0.0.2 ping statistics --- 00:09:32.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.115 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:09:32.115 00:09:32.115 --- 10.0.0.1 ping statistics --- 00:09:32.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.115 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4145606 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4145606 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 4145606 ']' 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:32.115 15:46:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.115 [2024-07-12 15:46:01.780433] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:09:32.115 [2024-07-12 15:46:01.780520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.115 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.373 [2024-07-12 15:46:01.845561] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:32.373 [2024-07-12 15:46:01.950395] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.373 [2024-07-12 15:46:01.950445] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.373 [2024-07-12 15:46:01.950458] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.373 [2024-07-12 15:46:01.950470] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.373 [2024-07-12 15:46:01.950480] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.374 [2024-07-12 15:46:01.950576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.374 [2024-07-12 15:46:01.950636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.374 [2024-07-12 15:46:01.950639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:32.374 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.939 [2024-07-12 15:46:02.373986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.939 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:33.197 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.197 [2024-07-12 15:46:02.892890] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.197 15:46:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.456 15:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:33.766 Malloc0 00:09:33.766 15:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:34.047 Delay0 00:09:34.047 15:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.305 15:46:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:34.562 NULL1 00:09:34.562 15:46:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:34.820 15:46:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4146012 00:09:34.820 15:46:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:34.820 15:46:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:34.820 15:46:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.820 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.192 Read completed with error (sct=0, sc=11) 00:09:36.192 15:46:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:36.450 15:46:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:36.450 15:46:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:36.707 true 00:09:36.707 15:46:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:36.707 15:46:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.273 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.273 15:46:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.839 15:46:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:37.839 15:46:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:37.839 true 00:09:37.839 15:46:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:37.839 15:46:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.096 15:46:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.354 15:46:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:38.354 15:46:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:38.611 true 00:09:38.611 15:46:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:38.611 15:46:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.868 15:46:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.125 15:46:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:39.125 15:46:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:39.382 true 00:09:39.382 15:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:39.382 15:46:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.755 15:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.755 15:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:40.755 15:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:41.038 true 00:09:41.038 15:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:41.038 15:46:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.972 15:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.972 15:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:41.972 15:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:42.229 true 00:09:42.229 15:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:42.229 15:46:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.487 15:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.745 15:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:42.745 15:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:43.003 true 00:09:43.003 15:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:43.003 15:46:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.934 15:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:43.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.192 15:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:44.192 15:46:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:44.485 true 00:09:44.485 15:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:44.485 15:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.742 15:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.999 15:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:44.999 15:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:45.256 true 00:09:45.256 15:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:45.256 15:46:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.821 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.821 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:45.821 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:46.079 true 00:09:46.079 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:46.079 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.011 15:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.269 15:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:47.269 15:46:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:47.526 true 00:09:47.526 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:47.526 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.784 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.042 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:48.042 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:48.300 true 00:09:48.300 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:48.300 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.260 15:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.260 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.517 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:49.517 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:49.773 true 00:09:49.773 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:49.773 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.030 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.340 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:50.340 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:50.340 true 00:09:50.340 15:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:50.340 15:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.269 15:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.525 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.782 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:51.782 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:52.039 true 00:09:52.039 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:52.039 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.296 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:52.554 15:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:52.554 15:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:52.810 true 00:09:52.810 15:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:52.810 15:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.741 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:54.007 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:54.007 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:54.264 true 00:09:54.264 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:54.264 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.521 15:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.819 15:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:54.819 15:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:55.076 true 00:09:55.076 15:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:55.076 15:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.332 15:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:55.332 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:55.332 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:55.590 true 00:09:55.590 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:55.590 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.960 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.960 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:56.960 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:57.218 true 00:09:57.218 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:57.218 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.475 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.733 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:57.733 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:57.990 true 00:09:57.990 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:57.990 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.248 15:46:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.505 15:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:58.505 15:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:58.762 true 00:09:58.762 15:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:09:58.762 15:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.133 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:00.133 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:00.133 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:00.391 true 00:10:00.391 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:10:00.391 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.648 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.906 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:00.906 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:01.164 true 00:10:01.164 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:10:01.164 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.421 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.679 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:01.679 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:01.936 true 00:10:01.936 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:10:01.936 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.869 
15:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.127 15:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:03.127 15:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:03.414 true 00:10:03.414 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:10:03.414 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.671 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.928 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:03.928 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:04.185 true 00:10:04.185 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:10:04.185 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.442 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.699 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:04.699 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:04.956 true 00:10:04.956 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:10:04.956 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.889 Initializing NVMe Controllers 00:10:05.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.889 Controller IO queue size 128, less than required. 00:10:05.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:05.889 Controller IO queue size 128, less than required. 00:10:05.889 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:05.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:05.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:05.890 Initialization complete. Launching workers. 
00:10:05.890 ======================================================== 00:10:05.890 Latency(us) 00:10:05.890 Device Information : IOPS MiB/s Average min max 00:10:05.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 771.33 0.38 81245.40 3393.34 1024286.82 00:10:05.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9894.60 4.83 12936.58 3266.19 450697.65 00:10:05.890 ======================================================== 00:10:05.890 Total : 10665.93 5.21 17876.51 3266.19 1024286.82 00:10:05.890 00:10:06.147 15:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.405 15:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:06.405 15:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:06.662 true 00:10:06.662 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4146012 00:10:06.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4146012) - No such process 00:10:06.662 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4146012 00:10:06.662 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.920 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:07.178 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:07.178 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:07.178 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:07.178 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.178 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:07.435 null0 00:10:07.435 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.435 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.435 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:07.692 null1 00:10:07.692 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.693 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:07.693 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:07.950 null2 00:10:07.950 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:07.950 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:10:07.950 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:08.207 null3 00:10:08.207 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.207 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.207 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:08.464 null4 00:10:08.464 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.464 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.464 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:08.722 null5 00:10:08.722 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.722 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.722 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:08.980 null6 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:08.980 null7 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
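
From here the trace is in the multi-worker phase: @58-@60 create eight null bdevs of 100 MB with a 4096-byte block size (bdev_null_create null0..null7 above), and @62-@66 fork eight add_remove workers in the background, collect their pids, and wait for them (the "wait 4150107 4150108 ..." record further down). Per the @14-@18 markers, each worker pins one namespace ID to its bdev and repeats an add/remove cycle ten times. A sketch of what the trace implies, reconstructed rather than quoted from the script:

    # Reconstructed from the trace markers: @14-@18 (worker) and @58-@66 (driver).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }
    nthreads=8; pids=()                                                                  # @58
    for ((i = 0; i < nthreads; i++)); do                                                 # @59
        $rpc_py bdev_null_create "null$i" 100 4096                                       # @60: 100 MB, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do                                                 # @62
        add_remove $((i + 1)) "null$i" &                                                 # @63: nsid 1..8 onto null0..null7
        pids+=($!)                                                                       # @64
    done
    wait "${pids[@]}"                                                                    # @66

Because the eight workers run concurrently, the add/remove records that follow interleave in no fixed order; they simply cycle until every worker finishes its ten iterations.
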
00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:08.980 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
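
Not part of this run, but a handy way to see the churn these workers generate would be to poll the subsystem's namespace list from another shell with the standard nvmf_get_subsystems RPC; a hypothetical spot-check:

    # Hypothetical spot-check (not taken from this log): list the namespaces
    # currently attached to cnode1 while the add_remove workers run.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
        | grep -E '"nsid"|"bdev_name"'

Each poll returns whatever subset of NSIDs 1-8 happens to be attached at that instant.
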
00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4150107 4150108 4150110 4150112 4150114 4150116 4150118 4150120 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.238 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.496 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.496 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.496 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.496 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.496 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.496 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.496 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.496 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.754 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.013 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.272 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.530 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.787 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.787 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.787 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.788 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.045 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.303 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.303 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.303 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.303 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.304 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.304 
15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:11.562 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:11.820 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:12.078 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.336 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:12.594 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.594 
15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:12.594 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:12.594 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:12.594 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:12.594 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:12.594 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:12.594 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:12.852 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:13.110 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.368 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:13.626 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.626 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.626 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:13.626 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:13.626 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:13.626 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:13.626 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.883 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:13.883 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:13.883 15:46:43 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:13.883 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.883 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:13.883 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:13.883 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.141 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:14.399 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:14.656 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
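[editor's note] The churn traced above is the core of ns_hotplug_stress.sh: a counted loop (lines 16-18 in the trace) that keeps attaching the eight null bdevs to nqn.2016-06.io.spdk:cnode1 in a shuffled order and detaching them again while host I/O continues. A minimal sketch of that behaviour, reconstructed from the xtrace output rather than copied from the script (the real loop may be structured differently), looks like this:

    # Reconstruction from the "@16/@17/@18" xtrace lines above; only the
    # rpc.py invocations, the shuffled ordering and the bound of 10 are
    # confirmed by the log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for (( i = 0; i < 10; ++i )); do
        # Attach null0-null7 as namespaces 1-8 in a shuffled order ...
        for n in $(shuf -e {0..7}); do
            "$rpc" nvmf_subsystem_add_ns -n $((n + 1)) "$nqn" "null$n"
        done
        # ... then detach them, also shuffled, while the host keeps doing I/O.
        for nsid in $(shuf -e {1..8}); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    done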
00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:14.657 rmmod nvme_tcp 00:10:14.657 rmmod nvme_fabrics 00:10:14.657 rmmod nvme_keyring 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4145606 ']' 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4145606 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 4145606 ']' 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 4145606 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4145606 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4145606' 00:10:14.657 killing process with pid 4145606 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 4145606 00:10:14.657 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 4145606 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.915 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.479 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:17.479 00:10:17.479 real 0m47.154s 00:10:17.479 user 3m29.031s 00:10:17.479 sys 0m18.548s 00:10:17.479 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.479 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:17.479 ************************************ 00:10:17.479 END TEST nvmf_ns_hotplug_stress 00:10:17.479 ************************************ 00:10:17.479 15:46:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:17.479 15:46:46 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:17.479 15:46:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:17.479 15:46:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.479 15:46:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.479 ************************************ 00:10:17.479 START TEST nvmf_connect_stress 00:10:17.479 ************************************ 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:17.479 * Looking for test storage... 
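[editor's note] Every test in this log is launched through the same run_test helper: a START banner, a timed run of the test script, and an END banner with the real/user/sys accounting shown above. The helper itself lives in autotest_common.sh and is not reproduced in the log, so the sketch below only mirrors the visible behaviour and is an assumption otherwise:

    # Hedged sketch of the run_test pattern; banner text and timing come
    # from the log, the function body is an assumption.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    # Invocation mirroring the trace:
    # run_test_sketch nvmf_connect_stress "$spdk/test/nvmf/target/connect_stress.sh" --transport=tcp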
00:10:17.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:17.479 15:46:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:19.384 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:19.384 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:19.385 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:19.385 Found net devices under 0000:09:00.0: cvl_0_0 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:19.385 15:46:48 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:19.385 Found net devices under 0000:09:00.1: cvl_0_1 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:19.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:10:19.385 00:10:19.385 --- 10.0.0.2 ping statistics --- 00:10:19.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.385 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:10:19.385 00:10:19.385 --- 10.0.0.1 ping statistics --- 00:10:19.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.385 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4152872 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4152872 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 4152872 ']' 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:19.385 15:46:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.385 [2024-07-12 15:46:49.042714] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
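[editor's note] The nvmf_tcp_init sequence above turns the two E810 ports into a self-contained test bed: cvl_0_0 is moved into a network namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and nvmf_tgt is then started inside that namespace. Condensed from the trace, with interface names and addresses specific to this run:

    # Commands collected from the trace above; the variable names and the
    # trailing '&' on the target launch are editorial.
    target_if=cvl_0_0        # moved into the namespace, used by nvmf_tgt
    initiator_if=cvl_0_1     # stays in the root namespace, used by the host
    ns=cvl_0_0_ns_spdk
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                          # initiator -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1      # target -> initiator
    ip netns exec "$ns" "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &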
00:10:19.385 [2024-07-12 15:46:49.042802] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.385 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.385 [2024-07-12 15:46:49.107910] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.643 [2024-07-12 15:46:49.215779] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.643 [2024-07-12 15:46:49.215823] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.643 [2024-07-12 15:46:49.215850] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.643 [2024-07-12 15:46:49.215860] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.643 [2024-07-12 15:46:49.215869] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:19.643 [2024-07-12 15:46:49.215962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.643 [2024-07-12 15:46:49.216028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.643 [2024-07-12 15:46:49.216031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.643 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.643 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:19.643 15:46:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.644 [2024-07-12 15:46:49.361768] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.644 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.901 [2024-07-12 15:46:49.393488] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.901 NULL1 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4153012 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.901 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.902 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.159 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.159 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:20.159 15:46:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.159 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.159 15:46:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.416 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.416 15:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:20.416 15:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.416 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.416 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.981 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.981 15:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 
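[editor's note] connect_stress.sh first provisions the target over JSON-RPC (transport, subsystem, listener, a null bdev) and then launches the connect_stress tool against the new listener; it also assembles a batch of twenty RPC commands into rpc.txt via the seq/cat loop above, whose contents are not visible in this log. The RPC calls and the tool invocation below are copied from the trace; expressing them through rpc.py instead of the script's rpc_cmd wrapper, and the surrounding glue, are editorial:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target-side provisioning (flags copied verbatim from the trace):
    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_null_create NULL1 1000 512

    # Connect/disconnect stress against that listener (the -t 10 seen in the trace):
    "$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!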
00:10:20.981 15:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.981 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:20.981 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.238 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.238 15:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:21.238 15:46:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.238 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.238 15:46:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.496 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.496 15:46:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:21.496 15:46:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.496 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.496 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.752 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.752 15:46:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:21.753 15:46:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.753 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.753 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.009 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.009 15:46:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:22.009 15:46:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.009 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.009 15:46:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.574 15:46:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:22.574 15:46:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.574 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.574 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.832 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.832 15:46:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:22.832 15:46:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.832 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.832 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.089 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.089 15:46:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:23.089 15:46:52 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.089 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.089 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.347 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.347 15:46:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:23.347 15:46:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.347 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.347 15:46:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.604 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.604 15:46:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:23.604 15:46:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.604 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.604 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.168 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.168 15:46:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:24.168 15:46:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.169 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.169 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.427 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.427 15:46:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:24.427 15:46:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.427 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.427 15:46:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.685 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.685 15:46:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:24.685 15:46:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.685 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.685 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.942 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.942 15:46:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:24.942 15:46:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.942 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.942 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.199 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.199 15:46:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:25.199 15:46:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.199 
15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.199 15:46:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.764 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.764 15:46:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:25.764 15:46:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.764 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.764 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.021 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.021 15:46:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:26.021 15:46:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.021 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.021 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.278 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.278 15:46:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:26.278 15:46:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.278 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.278 15:46:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.535 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.535 15:46:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:26.535 15:46:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.535 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.535 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.792 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.792 15:46:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:26.792 15:46:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.792 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.792 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.356 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.356 15:46:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:27.356 15:46:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.356 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.356 15:46:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.613 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.613 15:46:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:27.613 15:46:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.613 15:46:57 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.613 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.870 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.870 15:46:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:27.870 15:46:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.870 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.870 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.128 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.128 15:46:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:28.128 15:46:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.128 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.128 15:46:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.692 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.692 15:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:28.692 15:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.692 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.692 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.949 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.949 15:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:28.949 15:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.949 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.949 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.206 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.206 15:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:29.206 15:46:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.206 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.206 15:46:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.463 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.463 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:29.463 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.463 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.463 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.720 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.720 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:29.720 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:29.720 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
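[editor's note] While the tool runs, the script repeatedly checks that PID 4153012 is still alive (kill -0, line 34 in the trace) and then issues another rpc_cmd (line 35), until the check fails with "No such process" just below. A plausible shape for that monitor loop, inferred from the trace rather than taken from connect_stress.sh:

    # Inferred from the repeating "@34 kill -0" / "@35 rpc_cmd" pairs above;
    # the real loop may differ, and the contents of rpc.txt are not shown
    # in this log. PERF_PID is the connect_stress PID captured at launch
    # (4153012 in this run); rpc_cmd is the autotest RPC wrapper seen in
    # the trace, assumed here to accept commands on stdin.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpcs="$spdk/test/nvmf/target/rpc.txt"

    while kill -0 "$PERF_PID" 2> /dev/null; do
        rpc_cmd < "$rpcs"      # replay the pre-built batch of RPCs under load
    done

    wait "$PERF_PID" || true   # reap the tool once it exits (trace lines 34/38)
    rm -f "$rpcs"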
00:10:29.720 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.976 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4153012 00:10:30.233 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4153012) - No such process 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4153012 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.233 rmmod nvme_tcp 00:10:30.233 rmmod nvme_fabrics 00:10:30.233 rmmod nvme_keyring 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4152872 ']' 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4152872 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 4152872 ']' 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 4152872 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4152872 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4152872' 00:10:30.233 killing process with pid 4152872 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 4152872 00:10:30.233 15:46:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 4152872 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:30.492 15:47:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.073 15:47:02 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.073 00:10:33.073 real 0m15.483s 00:10:33.073 user 0m38.392s 00:10:33.073 sys 0m6.079s 00:10:33.073 15:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.073 15:47:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:33.073 ************************************ 00:10:33.073 END TEST nvmf_connect_stress 00:10:33.073 ************************************ 00:10:33.073 15:47:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:33.073 15:47:02 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:33.073 15:47:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:33.073 15:47:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.073 15:47:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:33.073 ************************************ 00:10:33.074 START TEST nvmf_fused_ordering 00:10:33.074 ************************************ 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:33.074 * Looking for test storage... 
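run_test wraps each target test in the START/END banners and the real/user/sys timing shown here, then simply executes the named script with the transport argument. The same test can also be launched by hand outside the harness; a minimal sketch, assuming a built SPDK tree at the workspace path above and root privileges (needed for hugepages, module loading, and namespace setup):

  # run the fused-ordering target test directly against the TCP transport
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/fused_ordering.sh --transport=tcp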
00:10:33.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.074 15:47:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:34.977 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:34.977 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:34.977 Found net devices under 0000:09:00.0: cvl_0_0 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.977 15:47:04 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:34.977 Found net devices under 0000:09:00.1: cvl_0_1 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:34.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
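Stripped of the tracing, the network bring-up above is a two-port topology on the E810 NIC: port cvl_0_0 is moved into a private namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, with an iptables rule admitting NVMe/TCP traffic on port 4420 and a ping in each direction to confirm connectivity. A condensed sketch of the equivalent commands, using the interface names and addresses from this run (root required):

  ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP (port 4420) on the initiator port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator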
00:10:34.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:10:34.977 00:10:34.977 --- 10.0.0.2 ping statistics --- 00:10:34.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.977 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:34.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:10:34.977 00:10:34.977 --- 10.0.0.1 ping statistics --- 00:10:34.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.977 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.977 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4156166 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4156166 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4156166 ']' 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.978 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:34.978 [2024-07-12 15:47:04.521248] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
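nvmfappstart launches the SPDK target application inside the target namespace and waits for its RPC socket; the rpc_cmd calls that follow in the trace then provision the test subsystem over that socket before the fused-ordering workload is started. A rough shell equivalent, run from the SPDK repository root: the polling loop stands in for waitforlisten and is an illustration rather than the harness's exact code, while the flags, NQN, addresses, and sizes are taken from this run:

  # start the target in the namespace that owns the target-side port
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

  # wait until the app answers on its default RPC socket (/var/tmp/spdk.sock)
  until sudo ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # provision: TCP transport, subsystem, listener on 10.0.0.2:4420, 1000 MiB null bdev as namespace 1
  sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sudo ./scripts/rpc.py bdev_null_create NULL1 1000 512
  sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # exercise fused command ordering against the new listener
  sudo ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'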
00:10:34.978 [2024-07-12 15:47:04.521367] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.978 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.978 [2024-07-12 15:47:04.586143] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.978 [2024-07-12 15:47:04.693937] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.978 [2024-07-12 15:47:04.694002] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.978 [2024-07-12 15:47:04.694024] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.978 [2024-07-12 15:47:04.694035] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.978 [2024-07-12 15:47:04.694045] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.978 [2024-07-12 15:47:04.694071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 [2024-07-12 15:47:04.841152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 [2024-07-12 15:47:04.857345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.236 15:47:04 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 NULL1 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.236 15:47:04 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:35.236 [2024-07-12 15:47:04.900798] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:10:35.236 [2024-07-12 15:47:04.900833] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4156197 ] 00:10:35.236 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.801 Attached to nqn.2016-06.io.spdk:cnode1 00:10:35.801 Namespace ID: 1 size: 1GB 00:10:35.801 fused_ordering(0) 00:10:35.801 fused_ordering(1) 00:10:35.801 fused_ordering(2) 00:10:35.801 fused_ordering(3) 00:10:35.801 fused_ordering(4) 00:10:35.801 fused_ordering(5) 00:10:35.801 fused_ordering(6) 00:10:35.801 fused_ordering(7) 00:10:35.801 fused_ordering(8) 00:10:35.801 fused_ordering(9) 00:10:35.801 fused_ordering(10) 00:10:35.801 fused_ordering(11) 00:10:35.801 fused_ordering(12) 00:10:35.801 fused_ordering(13) 00:10:35.801 fused_ordering(14) 00:10:35.801 fused_ordering(15) 00:10:35.801 fused_ordering(16) 00:10:35.801 fused_ordering(17) 00:10:35.801 fused_ordering(18) 00:10:35.801 fused_ordering(19) 00:10:35.801 fused_ordering(20) 00:10:35.801 fused_ordering(21) 00:10:35.801 fused_ordering(22) 00:10:35.801 fused_ordering(23) 00:10:35.801 fused_ordering(24) 00:10:35.801 fused_ordering(25) 00:10:35.801 fused_ordering(26) 00:10:35.801 fused_ordering(27) 00:10:35.801 fused_ordering(28) 00:10:35.801 fused_ordering(29) 00:10:35.801 fused_ordering(30) 00:10:35.801 fused_ordering(31) 00:10:35.801 fused_ordering(32) 00:10:35.801 fused_ordering(33) 00:10:35.801 fused_ordering(34) 00:10:35.801 fused_ordering(35) 00:10:35.801 fused_ordering(36) 00:10:35.801 fused_ordering(37) 00:10:35.801 fused_ordering(38) 00:10:35.801 fused_ordering(39) 00:10:35.801 fused_ordering(40) 00:10:35.801 fused_ordering(41) 00:10:35.801 fused_ordering(42) 00:10:35.801 fused_ordering(43) 00:10:35.801 
[fused_ordering(44) through fused_ordering(1012) logged in unbroken sequence, 00:10:35.801 to 00:10:38.436]
00:10:38.436 fused_ordering(1013) 00:10:38.436 fused_ordering(1014) 00:10:38.436 fused_ordering(1015) 00:10:38.436 fused_ordering(1016) 00:10:38.436 fused_ordering(1017) 00:10:38.436 fused_ordering(1018) 00:10:38.436 fused_ordering(1019) 00:10:38.436 fused_ordering(1020) 00:10:38.436 fused_ordering(1021) 00:10:38.436 fused_ordering(1022) 00:10:38.436 fused_ordering(1023) 00:10:38.436 15:47:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:38.436 15:47:07 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:38.437 rmmod nvme_tcp 00:10:38.437 rmmod nvme_fabrics 00:10:38.437 rmmod nvme_keyring 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4156166 ']' 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4156166 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4156166 ']' 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4156166 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4156166 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4156166' 00:10:38.437 killing process with pid 4156166 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4156166 00:10:38.437 15:47:07 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4156166 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.695 15:47:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.604 15:47:10 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.604 00:10:40.604 real 0m8.060s 00:10:40.604 user 0m5.585s 00:10:40.604 sys 0m3.688s 00:10:40.604 15:47:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.604 15:47:10 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:40.604 ************************************ 00:10:40.604 END TEST nvmf_fused_ordering 00:10:40.604 ************************************ 00:10:40.604 15:47:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:40.604 15:47:10 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:40.604 15:47:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.604 15:47:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.604 15:47:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.605 ************************************ 00:10:40.605 START TEST nvmf_delete_subsystem 00:10:40.605 ************************************ 00:10:40.605 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:40.864 * Looking for test storage... 00:10:40.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.864 15:47:10 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.864 15:47:10 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:40.864 15:47:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.764 15:47:12 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.764 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:42.765 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:42.765 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.765 15:47:12 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:42.765 Found net devices under 0000:09:00.0: cvl_0_0 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:42.765 Found net devices under 0000:09:00.1: cvl_0_1 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.765 15:47:12 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.765 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:43.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:10:43.023 00:10:43.023 --- 10.0.0.2 ping statistics --- 00:10:43.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.023 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:43.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:10:43.023 00:10:43.023 --- 10.0.0.1 ping statistics --- 00:10:43.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.023 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4158521 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4158521 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4158521 ']' 00:10:43.023 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.024 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.024 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.024 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.024 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.024 [2024-07-12 15:47:12.687697] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:10:43.024 [2024-07-12 15:47:12.687772] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.024 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.024 [2024-07-12 15:47:12.750859] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:43.282 [2024-07-12 15:47:12.861336] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:43.282 [2024-07-12 15:47:12.861388] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.282 [2024-07-12 15:47:12.861403] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.282 [2024-07-12 15:47:12.861418] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.282 [2024-07-12 15:47:12.861428] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.282 [2024-07-12 15:47:12.861481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.282 [2024-07-12 15:47:12.861486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.282 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.282 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:43.282 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:43.282 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:43.282 15:47:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.282 [2024-07-12 15:47:13.004233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.282 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.540 [2024-07-12 15:47:13.020456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.540 NULL1 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.540 Delay0 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4158639 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:43.540 15:47:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:43.540 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.540 [2024-07-12 15:47:13.095159] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
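What the trace above has assembled: a 1000 MB null bdev (NULL1, 512-byte blocks) wrapped in a delay bdev (Delay0) that adds roughly one second of latency to every read and write (average and p99 latencies all set to 1000000 us), exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 over NVMe/TCP at 10.0.0.2:4420, with spdk_nvme_perf now running against it at queue depth 128. Condensed into a sketch, the traced sequence is roughly the following (rpc_cmd is the test suite's shell wrapper for the target's JSON-RPC interface, the perf binary path is shortened, and the backgrounding/pid capture is shown schematically):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512                          # 1000 MB backing device, 512 B blocks
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s added latency per I/O
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # becomes NSID 1
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2                                                          # let perf fill its queues before the delete

Because each command against Delay0 takes on the order of a second to complete, the nvmf_delete_subsystem call issued next is guaranteed to hit the subsystem while a full queue of I/O is still outstanding, which is the case this test exercises. The long runs of "Read/Write completed with error (sct=0, sc=8)" below are those in-flight commands completing back to the perf initiator with an abort-type status as the queues are torn down.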
00:10:45.436 15:47:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.436 15:47:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.436 15:47:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 [2024-07-12 15:47:15.223805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c390 is same with the state(5) to be set 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 
Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 Read completed with error (sct=0, sc=8) 00:10:45.695 Write completed with error (sct=0, sc=8) 00:10:45.695 starting I/O failed: -6 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read 
completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with starting I/O failed: -6 00:10:45.696 the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with Read completed with error (sct=0, sc=8) 00:10:45.696 the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 [2024-07-12 15:47:15.226141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with Read completed with error (sct=0, sc=8) 00:10:45.696 the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 [2024-07-12 15:47:15.226182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 [2024-07-12 15:47:15.226220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 [2024-07-12 15:47:15.226235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f59b4000c00 is same with the state(5) to be set 00:10:45.696 [2024-07-12 15:47:15.226250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d9a5b0 is same with the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read 
completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 starting I/O failed: -6 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 [2024-07-12 15:47:15.226776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f59b400d370 is same with the state(5) to be set 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 
00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Write completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:45.696 Read completed with error (sct=0, sc=8) 00:10:46.630 [2024-07-12 15:47:16.194356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121da70 is same with the state(5) to be set 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 [2024-07-12 15:47:16.224249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f59b400d020 is same with the state(5) to be set 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read 
completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 [2024-07-12 15:47:16.227228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121c7a0 is same with the state(5) to be set 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Write completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 Read completed with error (sct=0, sc=8) 00:10:46.630 [2024-07-12 15:47:16.228655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121ce40 is same with the state(5) to be set 00:10:46.630 Initializing NVMe Controllers 00:10:46.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:46.630 Controller IO queue size 128, less than required. 00:10:46.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:46.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:46.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:46.630 Initialization complete. Launching workers. 
00:10:46.630 ======================================================== 00:10:46.630 Latency(us) 00:10:46.630 Device Information : IOPS MiB/s Average min max 00:10:46.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 162.49 0.08 926592.49 469.44 2004746.24 00:10:46.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.04 0.08 909123.27 375.23 1997421.16 00:10:46.630 ======================================================== 00:10:46.630 Total : 316.53 0.15 918090.98 375.23 2004746.24 00:10:46.630 00:10:46.630 [2024-07-12 15:47:16.229454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121da70 (9): Bad file descriptor 00:10:46.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:46.630 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.630 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:46.630 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4158639 00:10:46.630 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4158639 00:10:47.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4158639) - No such process 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4158639 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4158639 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4158639 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.198 [2024-07-12 15:47:16.751754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4159065 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:47.198 15:47:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:47.198 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.198 [2024-07-12 15:47:16.816644] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
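The repeated "kill -0 ... / sleep 0.5" entries that follow come from a simple wait loop around the perf process started above. Roughly, as a minimal sketch reconstructed from the traced commands (perf binary path shortened, variable names illustrative):

  # Start perf against the target in the background and remember its pid
  spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  delay=0
  # kill -0 only checks that the pid still exists; poll until perf exits
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done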
00:10:47.765 15:47:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:47.765 15:47:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:47.765 15:47:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:48.331 15:47:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:48.331 15:47:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:48.331 15:47:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:48.589 15:47:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:48.589 15:47:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:48.589 15:47:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:49.176 15:47:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:49.176 15:47:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:49.176 15:47:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:49.748 15:47:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:49.748 15:47:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:49.748 15:47:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:50.313 15:47:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:50.313 15:47:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:50.313 15:47:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:50.313 Initializing NVMe Controllers 00:10:50.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:50.313 Controller IO queue size 128, less than required. 00:10:50.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:50.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:50.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:50.313 Initialization complete. Launching workers. 
00:10:50.313 ======================================================== 00:10:50.313 Latency(us) 00:10:50.313 Device Information : IOPS MiB/s Average min max 00:10:50.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003143.02 1000202.10 1010619.78 00:10:50.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005497.03 1000335.03 1013243.52 00:10:50.313 ======================================================== 00:10:50.313 Total : 256.00 0.12 1004320.03 1000202.10 1013243.52 00:10:50.313 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4159065 00:10:50.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4159065) - No such process 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4159065 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.571 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.571 rmmod nvme_tcp 00:10:50.829 rmmod nvme_fabrics 00:10:50.829 rmmod nvme_keyring 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4158521 ']' 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4158521 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4158521 ']' 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4158521 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4158521 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4158521' 00:10:50.829 killing process with pid 4158521 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4158521 00:10:50.829 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
4158521 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.090 15:47:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.000 15:47:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:53.000 00:10:53.000 real 0m12.387s 00:10:53.000 user 0m27.586s 00:10:53.000 sys 0m3.089s 00:10:53.000 15:47:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.000 15:47:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.000 ************************************ 00:10:53.000 END TEST nvmf_delete_subsystem 00:10:53.000 ************************************ 00:10:53.000 15:47:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:53.000 15:47:22 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:53.000 15:47:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:53.000 15:47:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.000 15:47:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:53.259 ************************************ 00:10:53.259 START TEST nvmf_ns_masking 00:10:53.259 ************************************ 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:53.259 * Looking for test storage... 
00:10:53.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=33ed0788-fa25-4ec8-9cc4-38eca5ff8ca2 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=76a46cde-fbd1-4a07-a4c9-f8f03f468fe5 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=88f976b3-34ce-4dd1-b81a-4f959f3ca9d5 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:53.259 15:47:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:55.794 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:55.794 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:55.794 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.795 
15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:55.795 Found net devices under 0000:09:00.0: cvl_0_0 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:55.795 Found net devices under 0000:09:00.1: cvl_0_1 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.795 15:47:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:55.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:10:55.795 00:10:55.795 --- 10.0.0.2 ping statistics --- 00:10:55.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.795 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:10:55.795 00:10:55.795 --- 10.0.0.1 ping statistics --- 00:10:55.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.795 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4161418 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4161418 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4161418 ']' 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.795 15:47:25 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:55.795 [2024-07-12 15:47:25.184071] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:10:55.795 [2024-07-12 15:47:25.184141] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.795 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.795 [2024-07-12 15:47:25.246638] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.795 [2024-07-12 15:47:25.348809] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.795 [2024-07-12 15:47:25.348861] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.795 [2024-07-12 15:47:25.348881] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.795 [2024-07-12 15:47:25.348899] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.795 [2024-07-12 15:47:25.348908] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.795 [2024-07-12 15:47:25.348954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:55.795 15:47:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.796 15:47:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:56.054 [2024-07-12 15:47:25.762144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.054 15:47:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:56.312 15:47:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:56.312 15:47:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:56.569 Malloc1 00:10:56.569 15:47:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:56.827 Malloc2 00:10:56.827 15:47:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
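Condensed, the target-side setup traced here (transport, malloc bdevs, subsystem, namespace, listener) is roughly the following rpc.py sequence; a sketch only, with the full rpc.py path shortened and the flags copied from the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MB malloc bdev, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420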
00:10:57.086 15:47:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:57.344 15:47:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.602 [2024-07-12 15:47:27.083837] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.602 15:47:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:57.602 15:47:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 88f976b3-34ce-4dd1-b81a-4f959f3ca9d5 -a 10.0.0.2 -s 4420 -i 4 00:10:57.602 15:47:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.602 15:47:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:57.602 15:47:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.602 15:47:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:57.602 15:47:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:00.129 [ 0]:0x1 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bad5c71472414b2a99af6a87219ff7d8 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bad5c71472414b2a99af6a87219ff7d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
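On the host side, the connect and visibility check exercised throughout this test reduce to roughly the following (a sketch of the traced ns_is_visible logic; the controller name nvme0 and the NQNs/host UUID are the ones shown in the trace):

  # Connect as host1, pinning the host NQN and host ID used for masking decisions
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 88f976b3-34ce-4dd1-b81a-4f959f3ca9d5 -i 4

  # A namespace counts as visible if it is listed and reports a non-zero NGUID
  nvme list-ns /dev/nvme0 | grep 0x1
  nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
  [[ $nguid != "00000000000000000000000000000000" ]] && echo "nsid 1 is visible"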
00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:00.129 [ 0]:0x1 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bad5c71472414b2a99af6a87219ff7d8 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bad5c71472414b2a99af6a87219ff7d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:00.129 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:00.129 [ 1]:0x2 00:11:00.130 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:00.130 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:00.130 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed81a2894b043e484e7758cd1dd82a5 00:11:00.130 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed81a2894b043e484e7758cd1dd82a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:00.130 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:00.130 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.130 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.130 15:47:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.694 15:47:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:00.950 15:47:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:00.951 15:47:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 88f976b3-34ce-4dd1-b81a-4f959f3ca9d5 -a 10.0.0.2 -s 4420 -i 4 00:11:00.951 15:47:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:00.951 15:47:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:00.951 15:47:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.951 15:47:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:00.951 15:47:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:00.951 15:47:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:03.477 15:47:32 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:03.477 [ 0]:0x2 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed81a2894b043e484e7758cd1dd82a5 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
fed81a2894b043e484e7758cd1dd82a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.477 15:47:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:03.477 [ 0]:0x1 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bad5c71472414b2a99af6a87219ff7d8 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bad5c71472414b2a99af6a87219ff7d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:03.477 [ 1]:0x2 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed81a2894b043e484e7758cd1dd82a5 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed81a2894b043e484e7758cd1dd82a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.477 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:03.734 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:03.734 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:03.734 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:03.734 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:03.734 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.734 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:03.734 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:03.735 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:03.992 [ 0]:0x2 00:11:03.992 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:03.992 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:03.992 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed81a2894b043e484e7758cd1dd82a5 00:11:03.992 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed81a2894b043e484e7758cd1dd82a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:03.992 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:03.992 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.992 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:04.250 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:04.250 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 88f976b3-34ce-4dd1-b81a-4f959f3ca9d5 -a 10.0.0.2 -s 4420 -i 4 00:11:04.507 15:47:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:04.507 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:04.507 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.507 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:04.507 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:04.507 15:47:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:06.404 15:47:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:06.404 15:47:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:06.404 15:47:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.404 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:06.404 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.404 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
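The masking round trip driven from the target side is, in condensed form (rpc.py path shortened; the commands are the ones traced above):

  # Attach the namespace without auto-visibility: no host sees it by default
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # Expose nsid 1 to host1 only, verify it appears, then hide it again
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1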
00:11:06.404 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:06.404 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:06.662 [ 0]:0x1 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bad5c71472414b2a99af6a87219ff7d8 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bad5c71472414b2a99af6a87219ff7d8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:06.662 [ 1]:0x2 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed81a2894b043e484e7758cd1dd82a5 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed81a2894b043e484e7758cd1dd82a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:06.662 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:06.919 15:47:36 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:07.175 [ 0]:0x2 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed81a2894b043e484e7758cd1dd82a5 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed81a2894b043e484e7758cd1dd82a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:07.175 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:07.435 [2024-07-12 15:47:36.928884] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:07.435 request: 00:11:07.435 { 00:11:07.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:07.435 "nsid": 2, 00:11:07.435 "host": "nqn.2016-06.io.spdk:host1", 00:11:07.435 "method": "nvmf_ns_remove_host", 00:11:07.435 "req_id": 1 00:11:07.435 } 00:11:07.435 Got JSON-RPC error response 00:11:07.435 response: 00:11:07.435 { 00:11:07.435 "code": -32602, 00:11:07.435 "message": "Invalid parameters" 00:11:07.435 } 00:11:07.435 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:07.435 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:07.435 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:07.436 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:07.436 [ 0]:0x2 00:11:07.436 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:07.436 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:07.436 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed81a2894b043e484e7758cd1dd82a5 00:11:07.436 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
fed81a2894b043e484e7758cd1dd82a5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:07.436 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:07.436 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4163039 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4163039 /var/tmp/host.sock 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4163039 ']' 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:07.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.734 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:07.734 [2024-07-12 15:47:37.253724] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:11:07.734 [2024-07-12 15:47:37.253795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4163039 ] 00:11:07.734 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.734 [2024-07-12 15:47:37.310271] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.734 [2024-07-12 15:47:37.415310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.992 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.992 15:47:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:07.992 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.249 15:47:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:08.506 15:47:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 33ed0788-fa25-4ec8-9cc4-38eca5ff8ca2 00:11:08.506 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:08.506 15:47:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 33ED0788FA254EC89CC438ECA5FF8CA2 -i 00:11:08.763 15:47:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 76a46cde-fbd1-4a07-a4c9-f8f03f468fe5 00:11:08.763 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:08.763 15:47:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 76A46CDEFBD14A07A4C9F8F03F468FE5 -i 00:11:09.020 15:47:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:09.277 15:47:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:09.535 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:09.535 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:10.101 nvme0n1 00:11:10.101 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:10.101 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:11:10.359 nvme1n2 00:11:10.359 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:10.359 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:10.359 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:10.359 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:10.359 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:10.616 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:10.616 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:10.616 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:10.616 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:10.874 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 33ed0788-fa25-4ec8-9cc4-38eca5ff8ca2 == \3\3\e\d\0\7\8\8\-\f\a\2\5\-\4\e\c\8\-\9\c\c\4\-\3\8\e\c\a\5\f\f\8\c\a\2 ]] 00:11:10.874 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:10.874 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:10.874 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 76a46cde-fbd1-4a07-a4c9-f8f03f468fe5 == \7\6\a\4\6\c\d\e\-\f\b\d\1\-\4\a\0\7\-\a\4\c\9\-\f\8\f\0\3\f\4\6\8\f\e\5 ]] 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4163039 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4163039 ']' 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4163039 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4163039 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4163039' 00:11:11.132 killing process with pid 4163039 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4163039 00:11:11.132 15:47:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4163039 00:11:11.389 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.647 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:11.647 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:11.647 15:47:41 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:11.647 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:11.647 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:11.647 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:11.647 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:11.647 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:11.647 rmmod nvme_tcp 00:11:11.905 rmmod nvme_fabrics 00:11:11.905 rmmod nvme_keyring 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4161418 ']' 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4161418 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4161418 ']' 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4161418 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4161418 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4161418' 00:11:11.905 killing process with pid 4161418 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4161418 00:11:11.905 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4161418 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:12.165 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.702 15:47:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:14.703 00:11:14.703 real 0m21.080s 00:11:14.703 user 0m27.038s 00:11:14.703 sys 0m4.159s 00:11:14.703 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.703 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:14.703 ************************************ 00:11:14.703 END TEST nvmf_ns_masking 00:11:14.703 ************************************ 00:11:14.703 15:47:43 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:14.703 15:47:43 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:14.703 15:47:43 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:14.703 15:47:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:14.703 15:47:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.703 15:47:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.703 ************************************ 00:11:14.703 START TEST nvmf_nvme_cli 00:11:14.703 ************************************ 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:14.703 * Looking for test storage... 00:11:14.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:14.703 15:47:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:16.608 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:16.608 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:16.608 Found net devices under 0000:09:00.0: cvl_0_0 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:16.608 Found net devices under 0000:09:00.1: cvl_0_1 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.608 15:47:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.608 15:47:46 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:16.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:16.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:11:16.608 00:11:16.608 --- 10.0.0.2 ping statistics --- 00:11:16.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.608 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:11:16.608 00:11:16.608 --- 10.0.0.1 ping statistics --- 00:11:16.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.608 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4165537 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4165537 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4165537 ']' 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.608 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.608 [2024-07-12 15:47:46.201724] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:11:16.609 [2024-07-12 15:47:46.201812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.609 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.609 [2024-07-12 15:47:46.264775] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.866 [2024-07-12 15:47:46.370759] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.866 [2024-07-12 15:47:46.370806] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.866 [2024-07-12 15:47:46.370835] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.866 [2024-07-12 15:47:46.370848] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.866 [2024-07-12 15:47:46.370858] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.866 [2024-07-12 15:47:46.371616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.866 [2024-07-12 15:47:46.371660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.866 [2024-07-12 15:47:46.371730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.866 [2024-07-12 15:47:46.371721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.866 [2024-07-12 15:47:46.536348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:16.866 Malloc0 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.866 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 Malloc1 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.124 15:47:46 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 [2024-07-12 15:47:46.623818] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:11:17.124 00:11:17.124 Discovery Log Number of Records 2, Generation counter 2 00:11:17.124 =====Discovery Log Entry 0====== 00:11:17.124 trtype: tcp 00:11:17.124 adrfam: ipv4 00:11:17.124 subtype: current discovery subsystem 00:11:17.124 treq: not required 00:11:17.124 portid: 0 00:11:17.124 trsvcid: 4420 00:11:17.124 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:17.124 traddr: 10.0.0.2 00:11:17.124 eflags: explicit discovery connections, duplicate discovery information 00:11:17.124 sectype: none 00:11:17.124 =====Discovery Log Entry 1====== 00:11:17.124 trtype: tcp 00:11:17.124 adrfam: ipv4 00:11:17.124 subtype: nvme subsystem 00:11:17.124 treq: not required 00:11:17.124 portid: 0 00:11:17.124 trsvcid: 4420 00:11:17.124 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:17.124 traddr: 10.0.0.2 00:11:17.124 eflags: none 00:11:17.124 sectype: none 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:17.124 15:47:46 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.691 15:47:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:17.691 15:47:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:17.691 15:47:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.691 15:47:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:17.691 15:47:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:17.691 15:47:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:20.220 15:47:49 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:20.220 /dev/nvme0n1 ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:20.220 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:20.221 rmmod nvme_tcp 00:11:20.221 rmmod nvme_fabrics 00:11:20.221 rmmod nvme_keyring 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4165537 ']' 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4165537 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 4165537 ']' 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4165537 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4165537 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4165537' 00:11:20.221 killing process with pid 4165537 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4165537 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4165537 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.221 15:47:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.755 15:47:51 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:22.755 00:11:22.755 real 0m8.123s 00:11:22.755 user 0m14.499s 00:11:22.755 sys 0m2.246s 00:11:22.755 15:47:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.755 15:47:51 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 ************************************ 00:11:22.755 END TEST nvmf_nvme_cli 00:11:22.755 ************************************ 00:11:22.755 15:47:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:22.755 15:47:52 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:22.755 15:47:52 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:22.755 15:47:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:22.755 15:47:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.755 15:47:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:22.755 ************************************ 00:11:22.755 START TEST nvmf_vfio_user 00:11:22.755 ************************************ 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:22.755 * Looking for test storage... 00:11:22.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.755 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:22.756 
15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4166339 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4166339' 00:11:22.756 Process pid: 4166339 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4166339 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4166339 ']' 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:22.756 [2024-07-12 15:47:52.159246] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:11:22.756 [2024-07-12 15:47:52.159343] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.756 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.756 [2024-07-12 15:47:52.217480] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.756 [2024-07-12 15:47:52.325545] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.756 [2024-07-12 15:47:52.325596] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.756 [2024-07-12 15:47:52.325624] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.756 [2024-07-12 15:47:52.325635] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.756 [2024-07-12 15:47:52.325646] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
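For reference, the bring-up this log traces (target launch, then the per-device RPCs that follow below) condenses to roughly the sketch here. The binary paths, flags, NQNs and socket directories are copied verbatim from this run; the readiness loop around rpc.py is an assumption standing in for the harness's own waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start the target on cores 0-3 with tracepoint group mask 0xFFFF, as above.
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
  until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  # Per-device setup, shown for vfio-user1/cnode1 (vfio-user2/cnode2 is set up identically).
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc1
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
      -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0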
00:11:22.756 [2024-07-12 15:47:52.325697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.756 [2024-07-12 15:47:52.325755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.756 [2024-07-12 15:47:52.325822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.756 [2024-07-12 15:47:52.325824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:22.756 15:47:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:24.127 15:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:24.127 15:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:24.127 15:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:24.127 15:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:24.127 15:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:24.127 15:47:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:24.385 Malloc1 00:11:24.385 15:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:24.642 15:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:24.900 15:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:25.162 15:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:25.162 15:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:25.162 15:47:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:25.458 Malloc2 00:11:25.458 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:25.715 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:25.972 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:26.232 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:26.232 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:26.232 15:47:55 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:26.232 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:26.232 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:26.232 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:26.232 [2024-07-12 15:47:55.843629] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:11:26.232 [2024-07-12 15:47:55.843668] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4166765 ] 00:11:26.232 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.232 [2024-07-12 15:47:55.875503] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:26.232 [2024-07-12 15:47:55.885343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:26.232 [2024-07-12 15:47:55.885372] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fddc1602000 00:11:26.232 [2024-07-12 15:47:55.886334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.887329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.888336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.889340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.890340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.891360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.892353] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.893358] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:26.232 [2024-07-12 15:47:55.894363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:26.232 [2024-07-12 15:47:55.894385] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fddc15f7000 00:11:26.232 [2024-07-12 15:47:55.895595] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:26.232 [2024-07-12 15:47:55.911847] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:26.232 [2024-07-12 15:47:55.911884] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:26.232 [2024-07-12 15:47:55.916499] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:26.232 [2024-07-12 15:47:55.916556] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:26.232 [2024-07-12 15:47:55.916662] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:26.232 [2024-07-12 15:47:55.916688] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:26.232 [2024-07-12 15:47:55.916699] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:26.232 [2024-07-12 15:47:55.917493] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:26.232 [2024-07-12 15:47:55.917514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:26.232 [2024-07-12 15:47:55.917526] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:26.232 [2024-07-12 15:47:55.918496] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:26.232 [2024-07-12 15:47:55.918514] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:26.232 [2024-07-12 15:47:55.918534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:26.232 [2024-07-12 15:47:55.919499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:26.232 [2024-07-12 15:47:55.919518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:26.232 [2024-07-12 15:47:55.920504] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:26.232 [2024-07-12 15:47:55.920523] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:26.232 [2024-07-12 15:47:55.920532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:26.232 [2024-07-12 15:47:55.920544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:26.232 [2024-07-12 15:47:55.920653] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:26.232 [2024-07-12 15:47:55.920662] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:26.232 [2024-07-12 15:47:55.920670] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:26.232 [2024-07-12 15:47:55.921509] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:26.232 [2024-07-12 15:47:55.922515] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:26.232 [2024-07-12 15:47:55.923523] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:26.232 [2024-07-12 15:47:55.924515] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:26.232 [2024-07-12 15:47:55.924626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:26.232 [2024-07-12 15:47:55.925535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:26.232 [2024-07-12 15:47:55.925553] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:26.233 [2024-07-12 15:47:55.925562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925587] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:26.233 [2024-07-12 15:47:55.925623] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925648] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:26.233 [2024-07-12 15:47:55.925658] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.233 [2024-07-12 15:47:55.925690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.925747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.925762] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:26.233 [2024-07-12 15:47:55.925774] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:26.233 [2024-07-12 15:47:55.925781] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:26.233 [2024-07-12 15:47:55.925789] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:26.233 [2024-07-12 15:47:55.925796] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:26.233 [2024-07-12 15:47:55.925804] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:26.233 [2024-07-12 15:47:55.925811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.925859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.925874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.233 [2024-07-12 15:47:55.925886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.233 [2024-07-12 15:47:55.925897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.233 [2024-07-12 15:47:55.925908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.233 [2024-07-12 15:47:55.925916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.925955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.925965] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:26.233 [2024-07-12 15:47:55.925973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.925997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926083] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926114] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:26.233 [2024-07-12 15:47:55.926122] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:26.233 [2024-07-12 15:47:55.926132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926162] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:26.233 [2024-07-12 15:47:55.926179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926204] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:26.233 [2024-07-12 15:47:55.926212] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.233 [2024-07-12 15:47:55.926221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926292] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:26.233 [2024-07-12 15:47:55.926325] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.233 [2024-07-12 15:47:55.926342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
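The identify pass being traced here, and the perf/reconnect/arbitration/hello_world runs further down, all reach the controller through the same SPDK transport-ID string instead of a PCI address. A condensed sketch of the two invocations that bracket this output, copied from this run (only the TRID/SPDK variable names are introduced here for readability):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  # Controller/namespace identify with nvme, nvme_vfio and vfio_pci debug logging, as above.
  "$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" -g -L nvme -L nvme_vfio -L vfio_pci
  # Queue-depth-128, 4 KiB read perf run against the same endpoint (results appear below).
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2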
00:11:26.233 [2024-07-12 15:47:55.926417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926454] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:26.233 [2024-07-12 15:47:55.926461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:26.233 [2024-07-12 15:47:55.926473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:26.233 [2024-07-12 15:47:55.926499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:26.233 [2024-07-12 15:47:55.926644] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:26.233 [2024-07-12 15:47:55.926654] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:26.233 [2024-07-12 15:47:55.926660] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:26.233 [2024-07-12 15:47:55.926666] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:26.233 [2024-07-12 15:47:55.926691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:26.233 [2024-07-12 15:47:55.926702] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:26.233 
[2024-07-12 15:47:55.926710] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:26.233 [2024-07-12 15:47:55.926718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:26.233 [2024-07-12 15:47:55.926729] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:26.234 [2024-07-12 15:47:55.926736] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:26.234 [2024-07-12 15:47:55.926745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:26.234 [2024-07-12 15:47:55.926756] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:26.234 [2024-07-12 15:47:55.926764] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:26.234 [2024-07-12 15:47:55.926772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:26.234 [2024-07-12 15:47:55.926784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:26.234 [2024-07-12 15:47:55.926803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:26.234 [2024-07-12 15:47:55.926821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:26.234 [2024-07-12 15:47:55.926833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:26.234 ===================================================== 00:11:26.234 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:26.234 ===================================================== 00:11:26.234 Controller Capabilities/Features 00:11:26.234 ================================ 00:11:26.234 Vendor ID: 4e58 00:11:26.234 Subsystem Vendor ID: 4e58 00:11:26.234 Serial Number: SPDK1 00:11:26.234 Model Number: SPDK bdev Controller 00:11:26.234 Firmware Version: 24.09 00:11:26.234 Recommended Arb Burst: 6 00:11:26.234 IEEE OUI Identifier: 8d 6b 50 00:11:26.234 Multi-path I/O 00:11:26.234 May have multiple subsystem ports: Yes 00:11:26.234 May have multiple controllers: Yes 00:11:26.234 Associated with SR-IOV VF: No 00:11:26.234 Max Data Transfer Size: 131072 00:11:26.234 Max Number of Namespaces: 32 00:11:26.234 Max Number of I/O Queues: 127 00:11:26.234 NVMe Specification Version (VS): 1.3 00:11:26.234 NVMe Specification Version (Identify): 1.3 00:11:26.234 Maximum Queue Entries: 256 00:11:26.234 Contiguous Queues Required: Yes 00:11:26.234 Arbitration Mechanisms Supported 00:11:26.234 Weighted Round Robin: Not Supported 00:11:26.234 Vendor Specific: Not Supported 00:11:26.234 Reset Timeout: 15000 ms 00:11:26.234 Doorbell Stride: 4 bytes 00:11:26.234 NVM Subsystem Reset: Not Supported 00:11:26.234 Command Sets Supported 00:11:26.234 NVM Command Set: Supported 00:11:26.234 Boot Partition: Not Supported 00:11:26.234 Memory Page Size Minimum: 4096 bytes 00:11:26.234 Memory Page Size Maximum: 4096 bytes 00:11:26.234 Persistent Memory Region: Not Supported 
00:11:26.234 Optional Asynchronous Events Supported 00:11:26.234 Namespace Attribute Notices: Supported 00:11:26.234 Firmware Activation Notices: Not Supported 00:11:26.234 ANA Change Notices: Not Supported 00:11:26.234 PLE Aggregate Log Change Notices: Not Supported 00:11:26.234 LBA Status Info Alert Notices: Not Supported 00:11:26.234 EGE Aggregate Log Change Notices: Not Supported 00:11:26.234 Normal NVM Subsystem Shutdown event: Not Supported 00:11:26.234 Zone Descriptor Change Notices: Not Supported 00:11:26.234 Discovery Log Change Notices: Not Supported 00:11:26.234 Controller Attributes 00:11:26.234 128-bit Host Identifier: Supported 00:11:26.234 Non-Operational Permissive Mode: Not Supported 00:11:26.234 NVM Sets: Not Supported 00:11:26.234 Read Recovery Levels: Not Supported 00:11:26.234 Endurance Groups: Not Supported 00:11:26.234 Predictable Latency Mode: Not Supported 00:11:26.234 Traffic Based Keep ALive: Not Supported 00:11:26.234 Namespace Granularity: Not Supported 00:11:26.234 SQ Associations: Not Supported 00:11:26.234 UUID List: Not Supported 00:11:26.234 Multi-Domain Subsystem: Not Supported 00:11:26.234 Fixed Capacity Management: Not Supported 00:11:26.234 Variable Capacity Management: Not Supported 00:11:26.234 Delete Endurance Group: Not Supported 00:11:26.234 Delete NVM Set: Not Supported 00:11:26.234 Extended LBA Formats Supported: Not Supported 00:11:26.234 Flexible Data Placement Supported: Not Supported 00:11:26.234 00:11:26.234 Controller Memory Buffer Support 00:11:26.234 ================================ 00:11:26.234 Supported: No 00:11:26.234 00:11:26.234 Persistent Memory Region Support 00:11:26.234 ================================ 00:11:26.234 Supported: No 00:11:26.234 00:11:26.234 Admin Command Set Attributes 00:11:26.234 ============================ 00:11:26.234 Security Send/Receive: Not Supported 00:11:26.234 Format NVM: Not Supported 00:11:26.234 Firmware Activate/Download: Not Supported 00:11:26.234 Namespace Management: Not Supported 00:11:26.234 Device Self-Test: Not Supported 00:11:26.234 Directives: Not Supported 00:11:26.234 NVMe-MI: Not Supported 00:11:26.234 Virtualization Management: Not Supported 00:11:26.234 Doorbell Buffer Config: Not Supported 00:11:26.234 Get LBA Status Capability: Not Supported 00:11:26.234 Command & Feature Lockdown Capability: Not Supported 00:11:26.234 Abort Command Limit: 4 00:11:26.234 Async Event Request Limit: 4 00:11:26.234 Number of Firmware Slots: N/A 00:11:26.234 Firmware Slot 1 Read-Only: N/A 00:11:26.234 Firmware Activation Without Reset: N/A 00:11:26.234 Multiple Update Detection Support: N/A 00:11:26.234 Firmware Update Granularity: No Information Provided 00:11:26.234 Per-Namespace SMART Log: No 00:11:26.234 Asymmetric Namespace Access Log Page: Not Supported 00:11:26.234 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:26.234 Command Effects Log Page: Supported 00:11:26.234 Get Log Page Extended Data: Supported 00:11:26.234 Telemetry Log Pages: Not Supported 00:11:26.234 Persistent Event Log Pages: Not Supported 00:11:26.234 Supported Log Pages Log Page: May Support 00:11:26.234 Commands Supported & Effects Log Page: Not Supported 00:11:26.234 Feature Identifiers & Effects Log Page:May Support 00:11:26.234 NVMe-MI Commands & Effects Log Page: May Support 00:11:26.234 Data Area 4 for Telemetry Log: Not Supported 00:11:26.234 Error Log Page Entries Supported: 128 00:11:26.234 Keep Alive: Supported 00:11:26.234 Keep Alive Granularity: 10000 ms 00:11:26.234 00:11:26.234 NVM Command Set Attributes 
00:11:26.234 ========================== 00:11:26.234 Submission Queue Entry Size 00:11:26.234 Max: 64 00:11:26.234 Min: 64 00:11:26.234 Completion Queue Entry Size 00:11:26.234 Max: 16 00:11:26.234 Min: 16 00:11:26.234 Number of Namespaces: 32 00:11:26.234 Compare Command: Supported 00:11:26.234 Write Uncorrectable Command: Not Supported 00:11:26.234 Dataset Management Command: Supported 00:11:26.234 Write Zeroes Command: Supported 00:11:26.234 Set Features Save Field: Not Supported 00:11:26.234 Reservations: Not Supported 00:11:26.234 Timestamp: Not Supported 00:11:26.234 Copy: Supported 00:11:26.234 Volatile Write Cache: Present 00:11:26.234 Atomic Write Unit (Normal): 1 00:11:26.234 Atomic Write Unit (PFail): 1 00:11:26.234 Atomic Compare & Write Unit: 1 00:11:26.234 Fused Compare & Write: Supported 00:11:26.234 Scatter-Gather List 00:11:26.234 SGL Command Set: Supported (Dword aligned) 00:11:26.234 SGL Keyed: Not Supported 00:11:26.234 SGL Bit Bucket Descriptor: Not Supported 00:11:26.234 SGL Metadata Pointer: Not Supported 00:11:26.234 Oversized SGL: Not Supported 00:11:26.234 SGL Metadata Address: Not Supported 00:11:26.234 SGL Offset: Not Supported 00:11:26.234 Transport SGL Data Block: Not Supported 00:11:26.234 Replay Protected Memory Block: Not Supported 00:11:26.234 00:11:26.234 Firmware Slot Information 00:11:26.234 ========================= 00:11:26.234 Active slot: 1 00:11:26.234 Slot 1 Firmware Revision: 24.09 00:11:26.234 00:11:26.234 00:11:26.234 Commands Supported and Effects 00:11:26.234 ============================== 00:11:26.234 Admin Commands 00:11:26.234 -------------- 00:11:26.234 Get Log Page (02h): Supported 00:11:26.234 Identify (06h): Supported 00:11:26.234 Abort (08h): Supported 00:11:26.234 Set Features (09h): Supported 00:11:26.234 Get Features (0Ah): Supported 00:11:26.234 Asynchronous Event Request (0Ch): Supported 00:11:26.234 Keep Alive (18h): Supported 00:11:26.234 I/O Commands 00:11:26.234 ------------ 00:11:26.234 Flush (00h): Supported LBA-Change 00:11:26.234 Write (01h): Supported LBA-Change 00:11:26.234 Read (02h): Supported 00:11:26.234 Compare (05h): Supported 00:11:26.234 Write Zeroes (08h): Supported LBA-Change 00:11:26.234 Dataset Management (09h): Supported LBA-Change 00:11:26.234 Copy (19h): Supported LBA-Change 00:11:26.235 00:11:26.235 Error Log 00:11:26.235 ========= 00:11:26.235 00:11:26.235 Arbitration 00:11:26.235 =========== 00:11:26.235 Arbitration Burst: 1 00:11:26.235 00:11:26.235 Power Management 00:11:26.235 ================ 00:11:26.235 Number of Power States: 1 00:11:26.235 Current Power State: Power State #0 00:11:26.235 Power State #0: 00:11:26.235 Max Power: 0.00 W 00:11:26.235 Non-Operational State: Operational 00:11:26.235 Entry Latency: Not Reported 00:11:26.235 Exit Latency: Not Reported 00:11:26.235 Relative Read Throughput: 0 00:11:26.235 Relative Read Latency: 0 00:11:26.235 Relative Write Throughput: 0 00:11:26.235 Relative Write Latency: 0 00:11:26.235 Idle Power: Not Reported 00:11:26.235 Active Power: Not Reported 00:11:26.235 Non-Operational Permissive Mode: Not Supported 00:11:26.235 00:11:26.235 Health Information 00:11:26.235 ================== 00:11:26.235 Critical Warnings: 00:11:26.235 Available Spare Space: OK 00:11:26.235 Temperature: OK 00:11:26.235 Device Reliability: OK 00:11:26.235 Read Only: No 00:11:26.235 Volatile Memory Backup: OK 00:11:26.235 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:26.235 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:26.235 Available Spare: 0% 00:11:26.235 
Available Sp[2024-07-12 15:47:55.926951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:26.235 [2024-07-12 15:47:55.926969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:26.235 [2024-07-12 15:47:55.927010] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:26.235 [2024-07-12 15:47:55.927026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.235 [2024-07-12 15:47:55.927037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.235 [2024-07-12 15:47:55.927046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.235 [2024-07-12 15:47:55.927056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.235 [2024-07-12 15:47:55.928329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:26.235 [2024-07-12 15:47:55.928351] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:26.235 [2024-07-12 15:47:55.928556] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:26.235 [2024-07-12 15:47:55.928646] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:26.235 [2024-07-12 15:47:55.928675] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:26.235 [2024-07-12 15:47:55.929562] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:26.235 [2024-07-12 15:47:55.929585] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:26.235 [2024-07-12 15:47:55.929657] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:26.235 [2024-07-12 15:47:55.933341] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:26.493 are Threshold: 0% 00:11:26.493 Life Percentage Used: 0% 00:11:26.493 Data Units Read: 0 00:11:26.493 Data Units Written: 0 00:11:26.493 Host Read Commands: 0 00:11:26.493 Host Write Commands: 0 00:11:26.493 Controller Busy Time: 0 minutes 00:11:26.493 Power Cycles: 0 00:11:26.493 Power On Hours: 0 hours 00:11:26.493 Unsafe Shutdowns: 0 00:11:26.493 Unrecoverable Media Errors: 0 00:11:26.493 Lifetime Error Log Entries: 0 00:11:26.493 Warning Temperature Time: 0 minutes 00:11:26.493 Critical Temperature Time: 0 minutes 00:11:26.493 00:11:26.493 Number of Queues 00:11:26.493 ================ 00:11:26.493 Number of I/O Submission Queues: 127 00:11:26.493 Number of I/O Completion Queues: 127 00:11:26.493 00:11:26.493 Active Namespaces 00:11:26.493 ================= 00:11:26.493 Namespace ID:1 00:11:26.493 Error Recovery Timeout: Unlimited 00:11:26.493 Command 
Set Identifier: NVM (00h) 00:11:26.493 Deallocate: Supported 00:11:26.493 Deallocated/Unwritten Error: Not Supported 00:11:26.493 Deallocated Read Value: Unknown 00:11:26.493 Deallocate in Write Zeroes: Not Supported 00:11:26.493 Deallocated Guard Field: 0xFFFF 00:11:26.493 Flush: Supported 00:11:26.493 Reservation: Supported 00:11:26.493 Namespace Sharing Capabilities: Multiple Controllers 00:11:26.493 Size (in LBAs): 131072 (0GiB) 00:11:26.493 Capacity (in LBAs): 131072 (0GiB) 00:11:26.493 Utilization (in LBAs): 131072 (0GiB) 00:11:26.493 NGUID: 68BF07FE5FB04124BAD5FDAEB80F808B 00:11:26.493 UUID: 68bf07fe-5fb0-4124-bad5-fdaeb80f808b 00:11:26.493 Thin Provisioning: Not Supported 00:11:26.493 Per-NS Atomic Units: Yes 00:11:26.493 Atomic Boundary Size (Normal): 0 00:11:26.493 Atomic Boundary Size (PFail): 0 00:11:26.493 Atomic Boundary Offset: 0 00:11:26.493 Maximum Single Source Range Length: 65535 00:11:26.493 Maximum Copy Length: 65535 00:11:26.493 Maximum Source Range Count: 1 00:11:26.493 NGUID/EUI64 Never Reused: No 00:11:26.493 Namespace Write Protected: No 00:11:26.493 Number of LBA Formats: 1 00:11:26.493 Current LBA Format: LBA Format #00 00:11:26.493 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:26.493 00:11:26.493 15:47:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:26.493 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.493 [2024-07-12 15:47:56.175217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:31.758 Initializing NVMe Controllers 00:11:31.758 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:31.758 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:31.758 Initialization complete. Launching workers. 00:11:31.758 ======================================================== 00:11:31.758 Latency(us) 00:11:31.758 Device Information : IOPS MiB/s Average min max 00:11:31.758 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34649.27 135.35 3693.51 1172.55 7584.21 00:11:31.758 ======================================================== 00:11:31.758 Total : 34649.27 135.35 3693.51 1172.55 7584.21 00:11:31.758 00:11:31.758 [2024-07-12 15:48:01.199935] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:31.758 15:48:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:31.758 EAL: No free 2048 kB hugepages reported on node 1 00:11:31.758 [2024-07-12 15:48:01.441059] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:37.019 Initializing NVMe Controllers 00:11:37.019 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:37.019 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:37.019 Initialization complete. Launching workers. 
00:11:37.019 ======================================================== 00:11:37.019 Latency(us) 00:11:37.019 Device Information : IOPS MiB/s Average min max 00:11:37.019 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15967.20 62.37 8026.42 5948.71 15946.99 00:11:37.019 ======================================================== 00:11:37.019 Total : 15967.20 62.37 8026.42 5948.71 15946.99 00:11:37.019 00:11:37.019 [2024-07-12 15:48:06.476823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:37.019 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:37.019 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.019 [2024-07-12 15:48:06.678880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:42.273 [2024-07-12 15:48:11.750647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:42.273 Initializing NVMe Controllers 00:11:42.273 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:42.273 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:42.273 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:42.273 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:42.273 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:42.273 Initialization complete. Launching workers. 00:11:42.273 Starting thread on core 2 00:11:42.273 Starting thread on core 3 00:11:42.273 Starting thread on core 1 00:11:42.273 15:48:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:42.273 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.531 [2024-07-12 15:48:12.058801] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.820 [2024-07-12 15:48:15.120926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.820 Initializing NVMe Controllers 00:11:45.820 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.820 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.820 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:45.820 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:45.820 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:45.820 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:45.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:45.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:45.821 Initialization complete. Launching workers. 
00:11:45.821 Starting thread on core 1 with urgent priority queue 00:11:45.821 Starting thread on core 2 with urgent priority queue 00:11:45.821 Starting thread on core 3 with urgent priority queue 00:11:45.821 Starting thread on core 0 with urgent priority queue 00:11:45.821 SPDK bdev Controller (SPDK1 ) core 0: 4596.33 IO/s 21.76 secs/100000 ios 00:11:45.821 SPDK bdev Controller (SPDK1 ) core 1: 5014.67 IO/s 19.94 secs/100000 ios 00:11:45.821 SPDK bdev Controller (SPDK1 ) core 2: 4944.67 IO/s 20.22 secs/100000 ios 00:11:45.821 SPDK bdev Controller (SPDK1 ) core 3: 5204.33 IO/s 19.21 secs/100000 ios 00:11:45.821 ======================================================== 00:11:45.821 00:11:45.821 15:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:45.821 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.821 [2024-07-12 15:48:15.424849] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:45.821 Initializing NVMe Controllers 00:11:45.821 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.821 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:45.821 Namespace ID: 1 size: 0GB 00:11:45.821 Initialization complete. 00:11:45.821 INFO: using host memory buffer for IO 00:11:45.821 Hello world! 00:11:45.821 [2024-07-12 15:48:15.459402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:45.821 15:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:46.078 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.078 [2024-07-12 15:48:15.763852] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:47.452 Initializing NVMe Controllers 00:11:47.452 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.452 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.452 Initialization complete. Launching workers. 
00:11:47.452 submit (in ns) avg, min, max = 6524.5, 3584.4, 4014055.6 00:11:47.452 complete (in ns) avg, min, max = 26731.4, 2063.3, 4015598.9 00:11:47.452 00:11:47.452 Submit histogram 00:11:47.452 ================ 00:11:47.452 Range in us Cumulative Count 00:11:47.452 3.579 - 3.603: 0.2125% ( 27) 00:11:47.452 3.603 - 3.627: 2.6214% ( 306) 00:11:47.452 3.627 - 3.650: 9.0530% ( 817) 00:11:47.452 3.650 - 3.674: 18.7436% ( 1231) 00:11:47.452 3.674 - 3.698: 30.1818% ( 1453) 00:11:47.452 3.698 - 3.721: 39.8410% ( 1227) 00:11:47.452 3.721 - 3.745: 46.0206% ( 785) 00:11:47.452 3.745 - 3.769: 50.9250% ( 623) 00:11:47.452 3.769 - 3.793: 55.6247% ( 597) 00:11:47.452 3.793 - 3.816: 59.5844% ( 503) 00:11:47.452 3.816 - 3.840: 62.3317% ( 349) 00:11:47.452 3.840 - 3.864: 65.5751% ( 412) 00:11:47.452 3.864 - 3.887: 69.8811% ( 547) 00:11:47.452 3.887 - 3.911: 74.6280% ( 603) 00:11:47.452 3.911 - 3.935: 79.5481% ( 625) 00:11:47.452 3.935 - 3.959: 83.3268% ( 480) 00:11:47.452 3.959 - 3.982: 85.9167% ( 329) 00:11:47.452 3.982 - 4.006: 87.8296% ( 243) 00:11:47.452 4.006 - 4.030: 89.4277% ( 203) 00:11:47.452 4.030 - 4.053: 90.5298% ( 140) 00:11:47.452 4.053 - 4.077: 91.5296% ( 127) 00:11:47.452 4.077 - 4.101: 92.5766% ( 133) 00:11:47.452 4.101 - 4.124: 93.4425% ( 110) 00:11:47.452 4.124 - 4.148: 94.4265% ( 125) 00:11:47.452 4.148 - 4.172: 95.0720% ( 82) 00:11:47.452 4.172 - 4.196: 95.5444% ( 60) 00:11:47.452 4.196 - 4.219: 95.9616% ( 53) 00:11:47.452 4.219 - 4.243: 96.2843% ( 41) 00:11:47.452 4.243 - 4.267: 96.4575% ( 22) 00:11:47.452 4.267 - 4.290: 96.6386% ( 23) 00:11:47.452 4.290 - 4.314: 96.7409% ( 13) 00:11:47.452 4.314 - 4.338: 96.9141% ( 22) 00:11:47.452 4.338 - 4.361: 97.0243% ( 14) 00:11:47.452 4.361 - 4.385: 97.0873% ( 8) 00:11:47.452 4.385 - 4.409: 97.1109% ( 3) 00:11:47.452 4.409 - 4.433: 97.1660% ( 7) 00:11:47.452 4.433 - 4.456: 97.2054% ( 5) 00:11:47.452 4.456 - 4.480: 97.2526% ( 6) 00:11:47.452 4.480 - 4.504: 97.3077% ( 7) 00:11:47.452 4.504 - 4.527: 97.3392% ( 4) 00:11:47.452 4.527 - 4.551: 97.3628% ( 3) 00:11:47.452 4.551 - 4.575: 97.3707% ( 1) 00:11:47.452 4.575 - 4.599: 97.3786% ( 1) 00:11:47.452 4.622 - 4.646: 97.3864% ( 1) 00:11:47.452 4.646 - 4.670: 97.4022% ( 2) 00:11:47.453 4.670 - 4.693: 97.4337% ( 4) 00:11:47.453 4.693 - 4.717: 97.4809% ( 6) 00:11:47.453 4.717 - 4.741: 97.4888% ( 1) 00:11:47.453 4.741 - 4.764: 97.4967% ( 1) 00:11:47.453 4.764 - 4.788: 97.5439% ( 6) 00:11:47.453 4.788 - 4.812: 97.5675% ( 3) 00:11:47.453 4.812 - 4.836: 97.6069% ( 5) 00:11:47.453 4.836 - 4.859: 97.6620% ( 7) 00:11:47.453 4.859 - 4.883: 97.7013% ( 5) 00:11:47.453 4.883 - 4.907: 97.7328% ( 4) 00:11:47.453 4.907 - 4.930: 97.8037% ( 9) 00:11:47.453 4.930 - 4.954: 97.8509% ( 6) 00:11:47.453 4.954 - 4.978: 97.9060% ( 7) 00:11:47.453 4.978 - 5.001: 97.9296% ( 3) 00:11:47.453 5.001 - 5.025: 97.9611% ( 4) 00:11:47.453 5.025 - 5.049: 97.9926% ( 4) 00:11:47.453 5.049 - 5.073: 98.0162% ( 3) 00:11:47.453 5.073 - 5.096: 98.0477% ( 4) 00:11:47.453 5.096 - 5.120: 98.1028% ( 7) 00:11:47.453 5.120 - 5.144: 98.1107% ( 1) 00:11:47.453 5.144 - 5.167: 98.1343% ( 3) 00:11:47.453 5.167 - 5.191: 98.1422% ( 1) 00:11:47.453 5.191 - 5.215: 98.1500% ( 1) 00:11:47.453 5.215 - 5.239: 98.1579% ( 1) 00:11:47.453 5.239 - 5.262: 98.1658% ( 1) 00:11:47.453 5.262 - 5.286: 98.1737% ( 1) 00:11:47.453 5.310 - 5.333: 98.1894% ( 2) 00:11:47.453 5.452 - 5.476: 98.2051% ( 2) 00:11:47.453 5.476 - 5.499: 98.2130% ( 1) 00:11:47.453 5.523 - 5.547: 98.2288% ( 2) 00:11:47.453 5.570 - 5.594: 98.2366% ( 1) 00:11:47.453 5.665 - 5.689: 98.2445% ( 1) 
00:11:47.453 5.689 - 5.713: 98.2524% ( 1) 00:11:47.453 5.807 - 5.831: 98.2603% ( 1) 00:11:47.453 5.831 - 5.855: 98.2681% ( 1) 00:11:47.453 5.950 - 5.973: 98.2760% ( 1) 00:11:47.453 5.973 - 5.997: 98.2839% ( 1) 00:11:47.453 6.044 - 6.068: 98.2917% ( 1) 00:11:47.453 6.116 - 6.163: 98.2996% ( 1) 00:11:47.453 6.210 - 6.258: 98.3075% ( 1) 00:11:47.453 6.258 - 6.305: 98.3154% ( 1) 00:11:47.453 6.305 - 6.353: 98.3232% ( 1) 00:11:47.453 6.353 - 6.400: 98.3390% ( 2) 00:11:47.453 6.590 - 6.637: 98.3468% ( 1) 00:11:47.453 6.684 - 6.732: 98.3547% ( 1) 00:11:47.453 6.969 - 7.016: 98.3626% ( 1) 00:11:47.453 7.159 - 7.206: 98.3705% ( 1) 00:11:47.453 7.206 - 7.253: 98.3783% ( 1) 00:11:47.453 7.301 - 7.348: 98.3862% ( 1) 00:11:47.453 7.348 - 7.396: 98.4020% ( 2) 00:11:47.453 7.396 - 7.443: 98.4098% ( 1) 00:11:47.453 7.633 - 7.680: 98.4256% ( 2) 00:11:47.453 7.727 - 7.775: 98.4334% ( 1) 00:11:47.453 7.964 - 8.012: 98.4571% ( 3) 00:11:47.453 8.012 - 8.059: 98.4728% ( 2) 00:11:47.453 8.059 - 8.107: 98.4807% ( 1) 00:11:47.453 8.154 - 8.201: 98.4964% ( 2) 00:11:47.453 8.201 - 8.249: 98.5122% ( 2) 00:11:47.453 8.296 - 8.344: 98.5279% ( 2) 00:11:47.453 8.344 - 8.391: 98.5437% ( 2) 00:11:47.453 8.391 - 8.439: 98.5673% ( 3) 00:11:47.453 8.439 - 8.486: 98.5909% ( 3) 00:11:47.453 8.533 - 8.581: 98.5988% ( 1) 00:11:47.453 8.581 - 8.628: 98.6066% ( 1) 00:11:47.453 8.628 - 8.676: 98.6145% ( 1) 00:11:47.453 8.676 - 8.723: 98.6224% ( 1) 00:11:47.453 8.818 - 8.865: 98.6302% ( 1) 00:11:47.453 8.865 - 8.913: 98.6381% ( 1) 00:11:47.453 9.150 - 9.197: 98.6460% ( 1) 00:11:47.453 9.292 - 9.339: 98.6539% ( 1) 00:11:47.453 9.339 - 9.387: 98.6617% ( 1) 00:11:47.453 9.481 - 9.529: 98.6696% ( 1) 00:11:47.453 9.529 - 9.576: 98.6775% ( 1) 00:11:47.453 9.576 - 9.624: 98.6853% ( 1) 00:11:47.453 9.624 - 9.671: 98.6932% ( 1) 00:11:47.453 9.766 - 9.813: 98.7011% ( 1) 00:11:47.453 9.813 - 9.861: 98.7090% ( 1) 00:11:47.453 10.098 - 10.145: 98.7168% ( 1) 00:11:47.453 10.240 - 10.287: 98.7247% ( 1) 00:11:47.453 10.572 - 10.619: 98.7326% ( 1) 00:11:47.453 10.667 - 10.714: 98.7405% ( 1) 00:11:47.453 10.809 - 10.856: 98.7483% ( 1) 00:11:47.453 11.093 - 11.141: 98.7562% ( 1) 00:11:47.453 11.188 - 11.236: 98.7641% ( 1) 00:11:47.453 11.236 - 11.283: 98.7719% ( 1) 00:11:47.453 11.473 - 11.520: 98.7877% ( 2) 00:11:47.453 11.947 - 11.994: 98.7956% ( 1) 00:11:47.453 11.994 - 12.041: 98.8034% ( 1) 00:11:47.453 12.041 - 12.089: 98.8113% ( 1) 00:11:47.453 12.516 - 12.610: 98.8192% ( 1) 00:11:47.453 12.800 - 12.895: 98.8270% ( 1) 00:11:47.453 13.084 - 13.179: 98.8349% ( 1) 00:11:47.453 13.369 - 13.464: 98.8428% ( 1) 00:11:47.453 13.653 - 13.748: 98.8507% ( 1) 00:11:47.453 13.938 - 14.033: 98.8585% ( 1) 00:11:47.453 14.033 - 14.127: 98.8664% ( 1) 00:11:47.453 14.222 - 14.317: 98.8743% ( 1) 00:11:47.453 14.507 - 14.601: 98.8822% ( 1) 00:11:47.453 14.601 - 14.696: 98.8979% ( 2) 00:11:47.453 14.791 - 14.886: 98.9058% ( 1) 00:11:47.453 15.265 - 15.360: 98.9136% ( 1) 00:11:47.453 15.834 - 15.929: 98.9215% ( 1) 00:11:47.453 17.161 - 17.256: 98.9294% ( 1) 00:11:47.453 17.256 - 17.351: 98.9373% ( 1) 00:11:47.453 17.351 - 17.446: 98.9451% ( 1) 00:11:47.453 17.446 - 17.541: 98.9530% ( 1) 00:11:47.453 17.541 - 17.636: 98.9924% ( 5) 00:11:47.453 17.636 - 17.730: 99.0239% ( 4) 00:11:47.453 17.730 - 17.825: 99.0711% ( 6) 00:11:47.453 17.825 - 17.920: 99.1262% ( 7) 00:11:47.453 17.920 - 18.015: 99.1813% ( 7) 00:11:47.453 18.015 - 18.110: 99.2207% ( 5) 00:11:47.453 18.110 - 18.204: 99.2836% ( 8) 00:11:47.453 18.204 - 18.299: 99.3702% ( 11) 00:11:47.453 18.299 - 18.394: 
99.4489% ( 10) 00:11:47.453 18.394 - 18.489: 99.5198% ( 9) 00:11:47.453 18.489 - 18.584: 99.5906% ( 9) 00:11:47.453 18.584 - 18.679: 99.6694% ( 10) 00:11:47.453 18.679 - 18.773: 99.7166% ( 6) 00:11:47.453 18.773 - 18.868: 99.7560% ( 5) 00:11:47.453 18.868 - 18.963: 99.7875% ( 4) 00:11:47.453 19.058 - 19.153: 99.8032% ( 2) 00:11:47.453 19.153 - 19.247: 99.8268% ( 3) 00:11:47.453 19.627 - 19.721: 99.8347% ( 1) 00:11:47.453 19.816 - 19.911: 99.8426% ( 1) 00:11:47.453 20.006 - 20.101: 99.8504% ( 1) 00:11:47.453 20.101 - 20.196: 99.8583% ( 1) 00:11:47.453 20.385 - 20.480: 99.8740% ( 2) 00:11:47.453 21.713 - 21.807: 99.8819% ( 1) 00:11:47.453 22.661 - 22.756: 99.8898% ( 1) 00:11:47.453 26.738 - 26.927: 99.8977% ( 1) 00:11:47.453 28.444 - 28.634: 99.9055% ( 1) 00:11:47.453 28.824 - 29.013: 99.9134% ( 1) 00:11:47.453 29.013 - 29.203: 99.9213% ( 1) 00:11:47.453 29.393 - 29.582: 99.9292% ( 1) 00:11:47.453 29.961 - 30.151: 99.9370% ( 1) 00:11:47.453 3980.705 - 4004.978: 99.9843% ( 6) 00:11:47.453 4004.978 - 4029.250: 100.0000% ( 2) 00:11:47.453 00:11:47.453 Complete histogram 00:11:47.453 ================== 00:11:47.453 Range in us Cumulative Count 00:11:47.453 2.062 - 2.074: 1.3776% ( 175) 00:11:47.453 2.074 - 2.086: 15.1539% ( 1750) 00:11:47.453 2.086 - 2.098: 18.3185% ( 402) 00:11:47.453 2.098 - 2.110: 33.8739% ( 1976) 00:11:47.453 2.110 - 2.121: 57.1676% ( 2959) 00:11:47.453 2.121 - 2.133: 59.6158% ( 311) 00:11:47.453 2.133 - 2.145: 63.0324% ( 434) 00:11:47.453 2.145 - 2.157: 66.6850% ( 464) 00:11:47.453 2.157 - 2.169: 67.5667% ( 112) 00:11:47.453 2.169 - 2.181: 73.2347% ( 720) 00:11:47.453 2.181 - 2.193: 79.0050% ( 733) 00:11:47.453 2.193 - 2.204: 79.7764% ( 98) 00:11:47.453 2.204 - 2.216: 80.8156% ( 132) 00:11:47.453 2.216 - 2.228: 83.0906% ( 289) 00:11:47.453 2.228 - 2.240: 84.8067% ( 218) 00:11:47.453 2.240 - 2.252: 87.8533% ( 387) 00:11:47.453 2.252 - 2.264: 92.0885% ( 538) 00:11:47.453 2.264 - 2.276: 92.8206% ( 93) 00:11:47.453 2.276 - 2.287: 93.3953% ( 73) 00:11:47.453 2.287 - 2.299: 93.9227% ( 67) 00:11:47.453 2.299 - 2.311: 94.6548% ( 93) 00:11:47.453 2.311 - 2.323: 95.0169% ( 46) 00:11:47.453 2.323 - 2.335: 95.2137% ( 25) 00:11:47.453 2.335 - 2.347: 95.3633% ( 19) 00:11:47.453 2.347 - 2.359: 95.4814% ( 15) 00:11:47.453 2.359 - 2.370: 95.6624% ( 23) 00:11:47.453 2.370 - 2.382: 95.8671% ( 26) 00:11:47.453 2.382 - 2.394: 96.1348% ( 34) 00:11:47.453 2.394 - 2.406: 96.3316% ( 25) 00:11:47.453 2.406 - 2.418: 96.5677% ( 30) 00:11:47.453 2.418 - 2.430: 96.8511% ( 36) 00:11:47.453 2.430 - 2.441: 97.1188% ( 34) 00:11:47.453 2.441 - 2.453: 97.2762% ( 20) 00:11:47.453 2.453 - 2.465: 97.4494% ( 22) 00:11:47.453 2.465 - 2.477: 97.6620% ( 27) 00:11:47.453 2.477 - 2.489: 97.7879% ( 16) 00:11:47.453 2.489 - 2.501: 97.9847% ( 25) 00:11:47.453 2.501 - 2.513: 98.0713% ( 11) 00:11:47.453 2.513 - 2.524: 98.1343% ( 8) 00:11:47.453 2.524 - 2.536: 98.1894% ( 7) 00:11:47.453 2.536 - 2.548: 98.2681% ( 10) 00:11:47.453 2.548 - 2.560: 98.3075% ( 5) 00:11:47.453 2.560 - 2.572: 98.3232% ( 2) 00:11:47.453 2.572 - 2.584: 98.3390% ( 2) 00:11:47.453 2.584 - 2.596: 98.3547% ( 2) 00:11:47.453 2.607 - 2.619: 98.3705% ( 2) 00:11:47.453 2.619 - 2.631: 98.3783% ( 1) 00:11:47.453 2.643 - 2.655: 98.3941% ( 2) 00:11:47.453 2.702 - 2.714: 98.4020% ( 1) 00:11:47.453 2.809 - 2.821: 98.4098% ( 1) 00:11:47.453 2.821 - 2.833: 98.4177% ( 1) 00:11:47.453 2.833 - 2.844: 98.4334% ( 2) 00:11:47.453 2.844 - 2.856: 98.4413% ( 1) 00:11:47.453 2.856 - 2.868: 98.4492% ( 1) 00:11:47.453 2.916 - 2.927: 98.4571% ( 1) 00:11:47.454 3.153 - 3.176: 
98.4728% ( 2) 00:11:47.454 3.271 - 3.295: 98.4807% ( 1) 00:11:47.454 3.319 - 3.342: 98.4885% ( 1) 00:11:47.454 3.342 - 3.366: 98.5043% ( 2) 00:11:47.454 3.390 - 3.413: 98.5279% ( 3) 00:11:47.454 3.413 - 3.437: 98.5437% ( 2) 00:11:47.454 3.437 - 3.461: 98.5594% ( 2) 00:11:47.454 3.532 - 3.556: 98.5751% ( 2) 00:11:47.454 3.556 - 3.579: 98.5830% ( 1) 00:11:47.454 3.579 - 3.603: 98.5988% ( 2) 00:11:47.454 3.603 - 3.627: 98.6066% ( 1) 00:11:47.454 3.627 - 3.650: 98.6145% ( 1) 00:11:47.454 3.650 - 3.674: 98.6224% ( 1) 00:11:47.454 3.698 - 3.721: 9[2024-07-12 15:48:16.782074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:47.454 8.6381% ( 2) 00:11:47.454 3.721 - 3.745: 98.6617% ( 3) 00:11:47.454 3.769 - 3.793: 98.6696% ( 1) 00:11:47.454 3.840 - 3.864: 98.6775% ( 1) 00:11:47.454 3.864 - 3.887: 98.6853% ( 1) 00:11:47.454 3.959 - 3.982: 98.6932% ( 1) 00:11:47.454 3.982 - 4.006: 98.7168% ( 3) 00:11:47.454 4.030 - 4.053: 98.7247% ( 1) 00:11:47.454 4.907 - 4.930: 98.7326% ( 1) 00:11:47.454 5.049 - 5.073: 98.7405% ( 1) 00:11:47.454 5.144 - 5.167: 98.7483% ( 1) 00:11:47.454 5.262 - 5.286: 98.7562% ( 1) 00:11:47.454 5.310 - 5.333: 98.7641% ( 1) 00:11:47.454 5.428 - 5.452: 98.7719% ( 1) 00:11:47.454 6.116 - 6.163: 98.7877% ( 2) 00:11:47.454 6.210 - 6.258: 98.8034% ( 2) 00:11:47.454 6.353 - 6.400: 98.8192% ( 2) 00:11:47.454 6.400 - 6.447: 98.8270% ( 1) 00:11:47.454 6.590 - 6.637: 98.8349% ( 1) 00:11:47.454 6.637 - 6.684: 98.8428% ( 1) 00:11:47.454 6.684 - 6.732: 98.8507% ( 1) 00:11:47.454 6.874 - 6.921: 98.8585% ( 1) 00:11:47.454 7.490 - 7.538: 98.8664% ( 1) 00:11:47.454 7.917 - 7.964: 98.8743% ( 1) 00:11:47.454 15.644 - 15.739: 98.8900% ( 2) 00:11:47.454 15.739 - 15.834: 98.9136% ( 3) 00:11:47.454 15.834 - 15.929: 98.9294% ( 2) 00:11:47.454 15.929 - 16.024: 98.9609% ( 4) 00:11:47.454 16.024 - 16.119: 99.0002% ( 5) 00:11:47.454 16.119 - 16.213: 99.0160% ( 2) 00:11:47.454 16.213 - 16.308: 99.0553% ( 5) 00:11:47.454 16.308 - 16.403: 99.0711% ( 2) 00:11:47.454 16.403 - 16.498: 99.0947% ( 3) 00:11:47.454 16.498 - 16.593: 99.1183% ( 3) 00:11:47.454 16.593 - 16.687: 99.1656% ( 6) 00:11:47.454 16.687 - 16.782: 99.1734% ( 1) 00:11:47.454 16.782 - 16.877: 99.2285% ( 7) 00:11:47.454 16.877 - 16.972: 99.2600% ( 4) 00:11:47.454 16.972 - 17.067: 99.2758% ( 2) 00:11:47.454 17.067 - 17.161: 99.2915% ( 2) 00:11:47.454 17.161 - 17.256: 99.2994% ( 1) 00:11:47.454 17.256 - 17.351: 99.3151% ( 2) 00:11:47.454 17.351 - 17.446: 99.3230% ( 1) 00:11:47.454 17.446 - 17.541: 99.3309% ( 1) 00:11:47.454 17.541 - 17.636: 99.3387% ( 1) 00:11:47.454 17.636 - 17.730: 99.3545% ( 2) 00:11:47.454 17.730 - 17.825: 99.3624% ( 1) 00:11:47.454 18.394 - 18.489: 99.3702% ( 1) 00:11:47.454 18.584 - 18.679: 99.3781% ( 1) 00:11:47.454 18.679 - 18.773: 99.3860% ( 1) 00:11:47.454 3252.527 - 3276.800: 99.3938% ( 1) 00:11:47.454 3980.705 - 4004.978: 99.9292% ( 68) 00:11:47.454 4004.978 - 4029.250: 100.0000% ( 9) 00:11:47.454 00:11:47.454 15:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:47.454 15:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:47.454 15:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:47.454 15:48:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:47.454 15:48:16 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:47.454 [ 00:11:47.454 { 00:11:47.454 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:47.454 "subtype": "Discovery", 00:11:47.454 "listen_addresses": [], 00:11:47.454 "allow_any_host": true, 00:11:47.454 "hosts": [] 00:11:47.454 }, 00:11:47.454 { 00:11:47.454 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:47.454 "subtype": "NVMe", 00:11:47.454 "listen_addresses": [ 00:11:47.454 { 00:11:47.454 "trtype": "VFIOUSER", 00:11:47.454 "adrfam": "IPv4", 00:11:47.454 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:47.454 "trsvcid": "0" 00:11:47.454 } 00:11:47.454 ], 00:11:47.454 "allow_any_host": true, 00:11:47.454 "hosts": [], 00:11:47.454 "serial_number": "SPDK1", 00:11:47.454 "model_number": "SPDK bdev Controller", 00:11:47.454 "max_namespaces": 32, 00:11:47.454 "min_cntlid": 1, 00:11:47.454 "max_cntlid": 65519, 00:11:47.454 "namespaces": [ 00:11:47.454 { 00:11:47.454 "nsid": 1, 00:11:47.454 "bdev_name": "Malloc1", 00:11:47.454 "name": "Malloc1", 00:11:47.454 "nguid": "68BF07FE5FB04124BAD5FDAEB80F808B", 00:11:47.454 "uuid": "68bf07fe-5fb0-4124-bad5-fdaeb80f808b" 00:11:47.454 } 00:11:47.454 ] 00:11:47.454 }, 00:11:47.454 { 00:11:47.454 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:47.454 "subtype": "NVMe", 00:11:47.454 "listen_addresses": [ 00:11:47.454 { 00:11:47.454 "trtype": "VFIOUSER", 00:11:47.454 "adrfam": "IPv4", 00:11:47.454 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:47.454 "trsvcid": "0" 00:11:47.454 } 00:11:47.454 ], 00:11:47.454 "allow_any_host": true, 00:11:47.454 "hosts": [], 00:11:47.454 "serial_number": "SPDK2", 00:11:47.454 "model_number": "SPDK bdev Controller", 00:11:47.454 "max_namespaces": 32, 00:11:47.454 "min_cntlid": 1, 00:11:47.454 "max_cntlid": 65519, 00:11:47.454 "namespaces": [ 00:11:47.454 { 00:11:47.454 "nsid": 1, 00:11:47.454 "bdev_name": "Malloc2", 00:11:47.454 "name": "Malloc2", 00:11:47.454 "nguid": "D75C092CBA084CB1A22D1177BE9468EA", 00:11:47.454 "uuid": "d75c092c-ba08-4cb1-a22d-1177be9468ea" 00:11:47.454 } 00:11:47.454 ] 00:11:47.454 } 00:11:47.454 ] 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4169902 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:47.454 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:47.454 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.712 [2024-07-12 15:48:17.273796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:47.712 Malloc3 00:11:47.712 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:47.969 [2024-07-12 15:48:17.626461] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:47.969 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:47.969 Asynchronous Event Request test 00:11:47.969 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.969 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:47.969 Registering asynchronous event callbacks... 00:11:47.969 Starting namespace attribute notice tests for all controllers... 00:11:47.969 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:47.969 aer_cb - Changed Namespace 00:11:47.969 Cleaning up... 00:11:48.226 [ 00:11:48.226 { 00:11:48.226 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:48.226 "subtype": "Discovery", 00:11:48.226 "listen_addresses": [], 00:11:48.226 "allow_any_host": true, 00:11:48.226 "hosts": [] 00:11:48.226 }, 00:11:48.226 { 00:11:48.226 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:48.226 "subtype": "NVMe", 00:11:48.227 "listen_addresses": [ 00:11:48.227 { 00:11:48.227 "trtype": "VFIOUSER", 00:11:48.227 "adrfam": "IPv4", 00:11:48.227 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:48.227 "trsvcid": "0" 00:11:48.227 } 00:11:48.227 ], 00:11:48.227 "allow_any_host": true, 00:11:48.227 "hosts": [], 00:11:48.227 "serial_number": "SPDK1", 00:11:48.227 "model_number": "SPDK bdev Controller", 00:11:48.227 "max_namespaces": 32, 00:11:48.227 "min_cntlid": 1, 00:11:48.227 "max_cntlid": 65519, 00:11:48.227 "namespaces": [ 00:11:48.227 { 00:11:48.227 "nsid": 1, 00:11:48.227 "bdev_name": "Malloc1", 00:11:48.227 "name": "Malloc1", 00:11:48.227 "nguid": "68BF07FE5FB04124BAD5FDAEB80F808B", 00:11:48.227 "uuid": "68bf07fe-5fb0-4124-bad5-fdaeb80f808b" 00:11:48.227 }, 00:11:48.227 { 00:11:48.227 "nsid": 2, 00:11:48.227 "bdev_name": "Malloc3", 00:11:48.227 "name": "Malloc3", 00:11:48.227 "nguid": "6A0E935EEF3D46969AE794EF1D8BE1D9", 00:11:48.227 "uuid": "6a0e935e-ef3d-4696-9ae7-94ef1d8be1d9" 00:11:48.227 } 00:11:48.227 ] 00:11:48.227 }, 00:11:48.227 { 00:11:48.227 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:48.227 "subtype": "NVMe", 00:11:48.227 "listen_addresses": [ 00:11:48.227 { 00:11:48.227 "trtype": "VFIOUSER", 00:11:48.227 "adrfam": "IPv4", 00:11:48.227 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:48.227 "trsvcid": "0" 00:11:48.227 } 00:11:48.227 ], 00:11:48.227 "allow_any_host": true, 00:11:48.227 "hosts": [], 00:11:48.227 "serial_number": "SPDK2", 00:11:48.227 "model_number": "SPDK bdev Controller", 00:11:48.227 
"max_namespaces": 32, 00:11:48.227 "min_cntlid": 1, 00:11:48.227 "max_cntlid": 65519, 00:11:48.227 "namespaces": [ 00:11:48.227 { 00:11:48.227 "nsid": 1, 00:11:48.227 "bdev_name": "Malloc2", 00:11:48.227 "name": "Malloc2", 00:11:48.227 "nguid": "D75C092CBA084CB1A22D1177BE9468EA", 00:11:48.227 "uuid": "d75c092c-ba08-4cb1-a22d-1177be9468ea" 00:11:48.227 } 00:11:48.227 ] 00:11:48.227 } 00:11:48.227 ] 00:11:48.227 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4169902 00:11:48.227 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:48.227 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:48.227 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:48.227 15:48:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:48.227 [2024-07-12 15:48:17.910120] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:11:48.227 [2024-07-12 15:48:17.910164] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170037 ] 00:11:48.227 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.227 [2024-07-12 15:48:17.944441] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:48.227 [2024-07-12 15:48:17.952641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:48.227 [2024-07-12 15:48:17.952670] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb602636000 00:11:48.227 [2024-07-12 15:48:17.953631] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.227 [2024-07-12 15:48:17.954629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.486 [2024-07-12 15:48:17.955639] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.487 [2024-07-12 15:48:17.956648] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:48.487 [2024-07-12 15:48:17.957659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:48.487 [2024-07-12 15:48:17.958681] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.487 [2024-07-12 15:48:17.959685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:48.487 [2024-07-12 15:48:17.960696] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:48.487 [2024-07-12 15:48:17.961703] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:48.487 [2024-07-12 15:48:17.961724] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb60262b000 00:11:48.487 [2024-07-12 15:48:17.962838] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:48.487 [2024-07-12 15:48:17.975031] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:48.487 [2024-07-12 15:48:17.975061] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:48.487 [2024-07-12 15:48:17.984191] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:48.487 [2024-07-12 15:48:17.984243] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:48.487 [2024-07-12 15:48:17.984360] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:48.487 [2024-07-12 15:48:17.984392] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:48.487 [2024-07-12 15:48:17.984403] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:48.487 [2024-07-12 15:48:17.985201] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:48.487 [2024-07-12 15:48:17.985222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:48.487 [2024-07-12 15:48:17.985235] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:48.487 [2024-07-12 15:48:17.986208] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:48.487 [2024-07-12 15:48:17.986228] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:48.487 [2024-07-12 15:48:17.986242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:48.487 [2024-07-12 15:48:17.987219] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:48.487 [2024-07-12 15:48:17.987240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:48.487 [2024-07-12 15:48:17.988225] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:48.487 [2024-07-12 15:48:17.988245] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:48.487 [2024-07-12 15:48:17.988254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:48.487 [2024-07-12 15:48:17.988265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:48.487 [2024-07-12 15:48:17.988375] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:48.487 [2024-07-12 15:48:17.988386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:48.487 [2024-07-12 15:48:17.988395] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:48.487 [2024-07-12 15:48:17.989231] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:48.487 [2024-07-12 15:48:17.990242] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:48.487 [2024-07-12 15:48:17.991257] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:48.487 [2024-07-12 15:48:17.992255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:48.487 [2024-07-12 15:48:17.992342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:48.487 [2024-07-12 15:48:17.993269] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:48.487 [2024-07-12 15:48:17.993289] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:48.487 [2024-07-12 15:48:17.993322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:17.993349] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:48.487 [2024-07-12 15:48:17.993363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:17.993383] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:48.487 [2024-07-12 15:48:17.993393] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.487 [2024-07-12 15:48:17.993411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.487 [2024-07-12 15:48:17.997333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:48.487 [2024-07-12 15:48:17.997355] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:48.487 [2024-07-12 15:48:17.997364] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:48.487 [2024-07-12 15:48:17.997372] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:48.487 [2024-07-12 15:48:17.997380] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:48.487 [2024-07-12 15:48:17.997388] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:48.487 [2024-07-12 15:48:17.997396] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:48.487 [2024-07-12 15:48:17.997404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:17.997417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:17.997438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:48.487 [2024-07-12 15:48:18.005325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:48.487 [2024-07-12 15:48:18.005378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.487 [2024-07-12 15:48:18.005393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.487 [2024-07-12 15:48:18.005406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.487 [2024-07-12 15:48:18.005418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.487 [2024-07-12 15:48:18.005427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.005444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.005460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:48.487 [2024-07-12 15:48:18.013324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:48.487 [2024-07-12 15:48:18.013342] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:48.487 [2024-07-12 15:48:18.013356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.013372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.013383] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.013397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:48.487 [2024-07-12 15:48:18.021323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:48.487 [2024-07-12 15:48:18.021398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.021415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.021428] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:48.487 [2024-07-12 15:48:18.021437] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:48.487 [2024-07-12 15:48:18.021447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:48.487 [2024-07-12 15:48:18.029326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:48.487 [2024-07-12 15:48:18.029354] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:48.487 [2024-07-12 15:48:18.029371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.029385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.029398] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:48.487 [2024-07-12 15:48:18.029407] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.487 [2024-07-12 15:48:18.029416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.487 [2024-07-12 15:48:18.037327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:48.487 [2024-07-12 15:48:18.037356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:48.487 [2024-07-12 15:48:18.037372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.037386] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:48.488 [2024-07-12 15:48:18.037394] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.488 [2024-07-12 15:48:18.037404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.045324] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.045344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.045361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.045376] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.045386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.045395] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.045403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.045411] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:48.488 [2024-07-12 15:48:18.045419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:48.488 [2024-07-12 15:48:18.045428] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:48.488 [2024-07-12 15:48:18.045453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.053325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.053353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.061327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.061351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.069327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.069352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.077341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.077372] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:48.488 [2024-07-12 15:48:18.077384] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:48.488 [2024-07-12 15:48:18.077390] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:11:48.488 [2024-07-12 15:48:18.077396] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:48.488 [2024-07-12 15:48:18.077406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:48.488 [2024-07-12 15:48:18.077418] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:48.488 [2024-07-12 15:48:18.077426] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:48.488 [2024-07-12 15:48:18.077435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.077446] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:48.488 [2024-07-12 15:48:18.077454] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:48.488 [2024-07-12 15:48:18.077463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.077480] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:48.488 [2024-07-12 15:48:18.077489] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:48.488 [2024-07-12 15:48:18.077498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:48.488 [2024-07-12 15:48:18.085328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.085356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.085374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:48.488 [2024-07-12 15:48:18.085386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:48.488 ===================================================== 00:11:48.488 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:48.488 ===================================================== 00:11:48.488 Controller Capabilities/Features 00:11:48.488 ================================ 00:11:48.488 Vendor ID: 4e58 00:11:48.488 Subsystem Vendor ID: 4e58 00:11:48.488 Serial Number: SPDK2 00:11:48.488 Model Number: SPDK bdev Controller 00:11:48.488 Firmware Version: 24.09 00:11:48.488 Recommended Arb Burst: 6 00:11:48.488 IEEE OUI Identifier: 8d 6b 50 00:11:48.488 Multi-path I/O 00:11:48.488 May have multiple subsystem ports: Yes 00:11:48.488 May have multiple controllers: Yes 00:11:48.488 Associated with SR-IOV VF: No 00:11:48.488 Max Data Transfer Size: 131072 00:11:48.488 Max Number of Namespaces: 32 00:11:48.488 Max Number of I/O Queues: 127 00:11:48.488 NVMe Specification Version (VS): 1.3 00:11:48.488 NVMe Specification Version (Identify): 1.3 00:11:48.488 Maximum Queue Entries: 256 00:11:48.488 Contiguous Queues Required: Yes 00:11:48.488 Arbitration Mechanisms 
Supported 00:11:48.488 Weighted Round Robin: Not Supported 00:11:48.488 Vendor Specific: Not Supported 00:11:48.488 Reset Timeout: 15000 ms 00:11:48.488 Doorbell Stride: 4 bytes 00:11:48.488 NVM Subsystem Reset: Not Supported 00:11:48.488 Command Sets Supported 00:11:48.488 NVM Command Set: Supported 00:11:48.488 Boot Partition: Not Supported 00:11:48.488 Memory Page Size Minimum: 4096 bytes 00:11:48.488 Memory Page Size Maximum: 4096 bytes 00:11:48.488 Persistent Memory Region: Not Supported 00:11:48.488 Optional Asynchronous Events Supported 00:11:48.488 Namespace Attribute Notices: Supported 00:11:48.488 Firmware Activation Notices: Not Supported 00:11:48.488 ANA Change Notices: Not Supported 00:11:48.488 PLE Aggregate Log Change Notices: Not Supported 00:11:48.488 LBA Status Info Alert Notices: Not Supported 00:11:48.488 EGE Aggregate Log Change Notices: Not Supported 00:11:48.488 Normal NVM Subsystem Shutdown event: Not Supported 00:11:48.488 Zone Descriptor Change Notices: Not Supported 00:11:48.488 Discovery Log Change Notices: Not Supported 00:11:48.488 Controller Attributes 00:11:48.488 128-bit Host Identifier: Supported 00:11:48.488 Non-Operational Permissive Mode: Not Supported 00:11:48.488 NVM Sets: Not Supported 00:11:48.488 Read Recovery Levels: Not Supported 00:11:48.488 Endurance Groups: Not Supported 00:11:48.488 Predictable Latency Mode: Not Supported 00:11:48.488 Traffic Based Keep ALive: Not Supported 00:11:48.488 Namespace Granularity: Not Supported 00:11:48.488 SQ Associations: Not Supported 00:11:48.488 UUID List: Not Supported 00:11:48.488 Multi-Domain Subsystem: Not Supported 00:11:48.488 Fixed Capacity Management: Not Supported 00:11:48.488 Variable Capacity Management: Not Supported 00:11:48.488 Delete Endurance Group: Not Supported 00:11:48.488 Delete NVM Set: Not Supported 00:11:48.488 Extended LBA Formats Supported: Not Supported 00:11:48.488 Flexible Data Placement Supported: Not Supported 00:11:48.488 00:11:48.488 Controller Memory Buffer Support 00:11:48.488 ================================ 00:11:48.488 Supported: No 00:11:48.488 00:11:48.488 Persistent Memory Region Support 00:11:48.488 ================================ 00:11:48.488 Supported: No 00:11:48.488 00:11:48.488 Admin Command Set Attributes 00:11:48.488 ============================ 00:11:48.488 Security Send/Receive: Not Supported 00:11:48.488 Format NVM: Not Supported 00:11:48.488 Firmware Activate/Download: Not Supported 00:11:48.488 Namespace Management: Not Supported 00:11:48.488 Device Self-Test: Not Supported 00:11:48.488 Directives: Not Supported 00:11:48.488 NVMe-MI: Not Supported 00:11:48.488 Virtualization Management: Not Supported 00:11:48.488 Doorbell Buffer Config: Not Supported 00:11:48.488 Get LBA Status Capability: Not Supported 00:11:48.488 Command & Feature Lockdown Capability: Not Supported 00:11:48.488 Abort Command Limit: 4 00:11:48.488 Async Event Request Limit: 4 00:11:48.488 Number of Firmware Slots: N/A 00:11:48.488 Firmware Slot 1 Read-Only: N/A 00:11:48.488 Firmware Activation Without Reset: N/A 00:11:48.488 Multiple Update Detection Support: N/A 00:11:48.488 Firmware Update Granularity: No Information Provided 00:11:48.488 Per-Namespace SMART Log: No 00:11:48.488 Asymmetric Namespace Access Log Page: Not Supported 00:11:48.488 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:48.488 Command Effects Log Page: Supported 00:11:48.488 Get Log Page Extended Data: Supported 00:11:48.488 Telemetry Log Pages: Not Supported 00:11:48.488 Persistent Event Log Pages: Not Supported 
00:11:48.488 Supported Log Pages Log Page: May Support 00:11:48.488 Commands Supported & Effects Log Page: Not Supported 00:11:48.488 Feature Identifiers & Effects Log Page:May Support 00:11:48.488 NVMe-MI Commands & Effects Log Page: May Support 00:11:48.488 Data Area 4 for Telemetry Log: Not Supported 00:11:48.488 Error Log Page Entries Supported: 128 00:11:48.488 Keep Alive: Supported 00:11:48.488 Keep Alive Granularity: 10000 ms 00:11:48.488 00:11:48.488 NVM Command Set Attributes 00:11:48.488 ========================== 00:11:48.488 Submission Queue Entry Size 00:11:48.488 Max: 64 00:11:48.488 Min: 64 00:11:48.488 Completion Queue Entry Size 00:11:48.488 Max: 16 00:11:48.488 Min: 16 00:11:48.488 Number of Namespaces: 32 00:11:48.488 Compare Command: Supported 00:11:48.489 Write Uncorrectable Command: Not Supported 00:11:48.489 Dataset Management Command: Supported 00:11:48.489 Write Zeroes Command: Supported 00:11:48.489 Set Features Save Field: Not Supported 00:11:48.489 Reservations: Not Supported 00:11:48.489 Timestamp: Not Supported 00:11:48.489 Copy: Supported 00:11:48.489 Volatile Write Cache: Present 00:11:48.489 Atomic Write Unit (Normal): 1 00:11:48.489 Atomic Write Unit (PFail): 1 00:11:48.489 Atomic Compare & Write Unit: 1 00:11:48.489 Fused Compare & Write: Supported 00:11:48.489 Scatter-Gather List 00:11:48.489 SGL Command Set: Supported (Dword aligned) 00:11:48.489 SGL Keyed: Not Supported 00:11:48.489 SGL Bit Bucket Descriptor: Not Supported 00:11:48.489 SGL Metadata Pointer: Not Supported 00:11:48.489 Oversized SGL: Not Supported 00:11:48.489 SGL Metadata Address: Not Supported 00:11:48.489 SGL Offset: Not Supported 00:11:48.489 Transport SGL Data Block: Not Supported 00:11:48.489 Replay Protected Memory Block: Not Supported 00:11:48.489 00:11:48.489 Firmware Slot Information 00:11:48.489 ========================= 00:11:48.489 Active slot: 1 00:11:48.489 Slot 1 Firmware Revision: 24.09 00:11:48.489 00:11:48.489 00:11:48.489 Commands Supported and Effects 00:11:48.489 ============================== 00:11:48.489 Admin Commands 00:11:48.489 -------------- 00:11:48.489 Get Log Page (02h): Supported 00:11:48.489 Identify (06h): Supported 00:11:48.489 Abort (08h): Supported 00:11:48.489 Set Features (09h): Supported 00:11:48.489 Get Features (0Ah): Supported 00:11:48.489 Asynchronous Event Request (0Ch): Supported 00:11:48.489 Keep Alive (18h): Supported 00:11:48.489 I/O Commands 00:11:48.489 ------------ 00:11:48.489 Flush (00h): Supported LBA-Change 00:11:48.489 Write (01h): Supported LBA-Change 00:11:48.489 Read (02h): Supported 00:11:48.489 Compare (05h): Supported 00:11:48.489 Write Zeroes (08h): Supported LBA-Change 00:11:48.489 Dataset Management (09h): Supported LBA-Change 00:11:48.489 Copy (19h): Supported LBA-Change 00:11:48.489 00:11:48.489 Error Log 00:11:48.489 ========= 00:11:48.489 00:11:48.489 Arbitration 00:11:48.489 =========== 00:11:48.489 Arbitration Burst: 1 00:11:48.489 00:11:48.489 Power Management 00:11:48.489 ================ 00:11:48.489 Number of Power States: 1 00:11:48.489 Current Power State: Power State #0 00:11:48.489 Power State #0: 00:11:48.489 Max Power: 0.00 W 00:11:48.489 Non-Operational State: Operational 00:11:48.489 Entry Latency: Not Reported 00:11:48.489 Exit Latency: Not Reported 00:11:48.489 Relative Read Throughput: 0 00:11:48.489 Relative Read Latency: 0 00:11:48.489 Relative Write Throughput: 0 00:11:48.489 Relative Write Latency: 0 00:11:48.489 Idle Power: Not Reported 00:11:48.489 Active Power: Not Reported 00:11:48.489 
Non-Operational Permissive Mode: Not Supported 00:11:48.489 00:11:48.489 Health Information 00:11:48.489 ================== 00:11:48.489 Critical Warnings: 00:11:48.489 Available Spare Space: OK 00:11:48.489 Temperature: OK 00:11:48.489 Device Reliability: OK 00:11:48.489 Read Only: No 00:11:48.489 Volatile Memory Backup: OK 00:11:48.489 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:48.489 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:48.489 Available Spare: 0% 00:11:48.489 Available Sp[2024-07-12 15:48:18.085509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:48.489 [2024-07-12 15:48:18.093324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:48.489 [2024-07-12 15:48:18.093375] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:48.489 [2024-07-12 15:48:18.093393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.489 [2024-07-12 15:48:18.093404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.489 [2024-07-12 15:48:18.093414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.489 [2024-07-12 15:48:18.093423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.489 [2024-07-12 15:48:18.093502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:48.489 [2024-07-12 15:48:18.093523] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:48.489 [2024-07-12 15:48:18.094503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:48.489 [2024-07-12 15:48:18.094575] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:48.489 [2024-07-12 15:48:18.094590] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:48.489 [2024-07-12 15:48:18.095513] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:48.489 [2024-07-12 15:48:18.095538] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:48.489 [2024-07-12 15:48:18.095589] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:48.489 [2024-07-12 15:48:18.098327] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:48.489 are Threshold: 0% 00:11:48.489 Life Percentage Used: 0% 00:11:48.489 Data Units Read: 0 00:11:48.489 Data Units Written: 0 00:11:48.489 Host Read Commands: 0 00:11:48.489 Host Write Commands: 0 00:11:48.489 Controller Busy Time: 0 minutes 00:11:48.489 Power Cycles: 0 00:11:48.489 Power On Hours: 0 hours 00:11:48.489 Unsafe Shutdowns: 0 00:11:48.489 Unrecoverable Media 
Errors: 0 00:11:48.489 Lifetime Error Log Entries: 0 00:11:48.489 Warning Temperature Time: 0 minutes 00:11:48.489 Critical Temperature Time: 0 minutes 00:11:48.489 00:11:48.489 Number of Queues 00:11:48.489 ================ 00:11:48.489 Number of I/O Submission Queues: 127 00:11:48.489 Number of I/O Completion Queues: 127 00:11:48.489 00:11:48.489 Active Namespaces 00:11:48.489 ================= 00:11:48.489 Namespace ID:1 00:11:48.489 Error Recovery Timeout: Unlimited 00:11:48.489 Command Set Identifier: NVM (00h) 00:11:48.489 Deallocate: Supported 00:11:48.489 Deallocated/Unwritten Error: Not Supported 00:11:48.489 Deallocated Read Value: Unknown 00:11:48.489 Deallocate in Write Zeroes: Not Supported 00:11:48.489 Deallocated Guard Field: 0xFFFF 00:11:48.489 Flush: Supported 00:11:48.489 Reservation: Supported 00:11:48.489 Namespace Sharing Capabilities: Multiple Controllers 00:11:48.489 Size (in LBAs): 131072 (0GiB) 00:11:48.489 Capacity (in LBAs): 131072 (0GiB) 00:11:48.489 Utilization (in LBAs): 131072 (0GiB) 00:11:48.489 NGUID: D75C092CBA084CB1A22D1177BE9468EA 00:11:48.489 UUID: d75c092c-ba08-4cb1-a22d-1177be9468ea 00:11:48.489 Thin Provisioning: Not Supported 00:11:48.489 Per-NS Atomic Units: Yes 00:11:48.489 Atomic Boundary Size (Normal): 0 00:11:48.489 Atomic Boundary Size (PFail): 0 00:11:48.489 Atomic Boundary Offset: 0 00:11:48.489 Maximum Single Source Range Length: 65535 00:11:48.489 Maximum Copy Length: 65535 00:11:48.489 Maximum Source Range Count: 1 00:11:48.489 NGUID/EUI64 Never Reused: No 00:11:48.489 Namespace Write Protected: No 00:11:48.489 Number of LBA Formats: 1 00:11:48.489 Current LBA Format: LBA Format #00 00:11:48.489 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:48.489 00:11:48.489 15:48:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:48.489 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.747 [2024-07-12 15:48:18.327081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:54.043 Initializing NVMe Controllers 00:11:54.043 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:54.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:54.043 Initialization complete. Launching workers. 
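Note (added annotation, not part of the captured output): the latency table that follows comes from the spdk_nvme_perf read run launched just above, and a matching write run is recorded right after it. A minimal bash sketch of the pair, assuming the same build tree and a target still serving the vfio-user2 socket; SPDK, TRID and WORKLOAD are illustrative names, and the flag comments are the usual spdk_nvme_perf meanings rather than anything stated in the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
for WORKLOAD in read write; do
  # -q 128: queue depth, -o 4096: I/O size in bytes, -t 5: run time in seconds,
  # -c 0x2: pin the I/O thread to core 1; -s 256 and -g are the DPDK memory
  # options this job passes to every tool.
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$WORKLOAD" -t 5 -c 0x2
done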
00:11:54.043 ======================================================== 00:11:54.043 Latency(us) 00:11:54.043 Device Information : IOPS MiB/s Average min max 00:11:54.043 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34808.29 135.97 3676.45 1169.84 7533.55 00:11:54.043 ======================================================== 00:11:54.043 Total : 34808.29 135.97 3676.45 1169.84 7533.55 00:11:54.043 00:11:54.043 [2024-07-12 15:48:23.432713] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:54.043 15:48:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:54.043 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.043 [2024-07-12 15:48:23.676352] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:59.306 Initializing NVMe Controllers 00:11:59.306 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:59.306 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:59.306 Initialization complete. Launching workers. 00:11:59.306 ======================================================== 00:11:59.306 Latency(us) 00:11:59.306 Device Information : IOPS MiB/s Average min max 00:11:59.306 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32551.39 127.15 3932.10 1215.05 11345.47 00:11:59.306 ======================================================== 00:11:59.306 Total : 32551.39 127.15 3932.10 1215.05 11345.47 00:11:59.306 00:11:59.306 [2024-07-12 15:48:28.697603] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:59.306 15:48:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:59.306 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.306 [2024-07-12 15:48:28.913158] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:04.569 [2024-07-12 15:48:34.053468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:04.569 Initializing NVMe Controllers 00:12:04.569 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:04.569 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:04.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:04.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:04.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:04.569 Initialization complete. Launching workers. 
00:12:04.569 Starting thread on core 2 00:12:04.569 Starting thread on core 3 00:12:04.569 Starting thread on core 1 00:12:04.569 15:48:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:04.569 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.828 [2024-07-12 15:48:34.364472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:08.108 [2024-07-12 15:48:37.449302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:08.108 Initializing NVMe Controllers 00:12:08.108 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:08.108 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:08.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:08.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:08.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:08.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:08.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:08.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:08.109 Initialization complete. Launching workers. 00:12:08.109 Starting thread on core 1 with urgent priority queue 00:12:08.109 Starting thread on core 2 with urgent priority queue 00:12:08.109 Starting thread on core 3 with urgent priority queue 00:12:08.109 Starting thread on core 0 with urgent priority queue 00:12:08.109 SPDK bdev Controller (SPDK2 ) core 0: 6711.67 IO/s 14.90 secs/100000 ios 00:12:08.109 SPDK bdev Controller (SPDK2 ) core 1: 6778.67 IO/s 14.75 secs/100000 ios 00:12:08.109 SPDK bdev Controller (SPDK2 ) core 2: 6698.00 IO/s 14.93 secs/100000 ios 00:12:08.109 SPDK bdev Controller (SPDK2 ) core 3: 6203.00 IO/s 16.12 secs/100000 ios 00:12:08.109 ======================================================== 00:12:08.109 00:12:08.109 15:48:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:08.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.109 [2024-07-12 15:48:37.743814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:08.109 Initializing NVMe Controllers 00:12:08.109 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:08.109 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:08.109 Namespace ID: 1 size: 0GB 00:12:08.109 Initialization complete. 00:12:08.109 INFO: using host memory buffer for IO 00:12:08.109 Hello world! 
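The runs above all drive the same vfio-user controller with stock SPDK example binaries (spdk_nvme_perf for the read and write passes, then reconnect, arbitration, and hello_world), differing only in the workload flags passed after the shared transport ID string. As a minimal sketch of repeating the two perf passes by hand — assuming a built SPDK tree at the placeholder $SPDK_DIR and the target from this log still listening — the invocations reduce to:

# Sketch only: the transport ID and workload flags are copied from the invocations above;
# $SPDK_DIR is a placeholder for the build tree and is not part of the recorded test.
SPDK_DIR=/path/to/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
$SPDK_DIR/build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2

The reconnect, arbitration, and hello_world examples in the surrounding trace receive the same -r string; only the per-tool options (-q, -w, -M, -d, core masks) change between steps.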
00:12:08.109 [2024-07-12 15:48:37.754887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:08.109 15:48:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:08.365 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.365 [2024-07-12 15:48:38.053735] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:09.735 Initializing NVMe Controllers 00:12:09.735 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:09.735 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:09.735 Initialization complete. Launching workers. 00:12:09.735 submit (in ns) avg, min, max = 7634.9, 3578.9, 4017546.7 00:12:09.735 complete (in ns) avg, min, max = 23977.2, 2058.9, 4025941.1 00:12:09.735 00:12:09.735 Submit histogram 00:12:09.735 ================ 00:12:09.735 Range in us Cumulative Count 00:12:09.735 3.556 - 3.579: 0.0230% ( 3) 00:12:09.735 3.579 - 3.603: 1.9747% ( 255) 00:12:09.735 3.603 - 3.627: 6.9575% ( 651) 00:12:09.735 3.627 - 3.650: 15.6525% ( 1136) 00:12:09.735 3.650 - 3.674: 23.1382% ( 978) 00:12:09.735 3.674 - 3.698: 30.6391% ( 980) 00:12:09.735 3.698 - 3.721: 39.6403% ( 1176) 00:12:09.735 3.721 - 3.745: 47.6081% ( 1041) 00:12:09.735 3.745 - 3.769: 54.9024% ( 953) 00:12:09.735 3.769 - 3.793: 59.7704% ( 636) 00:12:09.735 3.793 - 3.816: 64.1791% ( 576) 00:12:09.735 3.816 - 3.840: 67.6158% ( 449) 00:12:09.735 3.840 - 3.864: 71.4887% ( 506) 00:12:09.735 3.864 - 3.887: 75.1856% ( 483) 00:12:09.735 3.887 - 3.911: 79.0356% ( 503) 00:12:09.735 3.911 - 3.935: 82.7708% ( 488) 00:12:09.735 3.935 - 3.959: 85.5951% ( 369) 00:12:09.735 3.959 - 3.982: 87.7000% ( 275) 00:12:09.735 3.982 - 4.006: 89.6441% ( 254) 00:12:09.735 4.006 - 4.030: 91.0677% ( 186) 00:12:09.735 4.030 - 4.053: 92.4837% ( 185) 00:12:09.735 4.053 - 4.077: 93.6778% ( 156) 00:12:09.735 4.077 - 4.101: 94.5809% ( 118) 00:12:09.735 4.101 - 4.124: 95.3617% ( 102) 00:12:09.735 4.124 - 4.148: 95.9281% ( 74) 00:12:09.735 4.148 - 4.172: 96.3337% ( 53) 00:12:09.735 4.172 - 4.196: 96.6016% ( 35) 00:12:09.735 4.196 - 4.219: 96.7700% ( 22) 00:12:09.735 4.219 - 4.243: 96.8772% ( 14) 00:12:09.735 4.243 - 4.267: 96.9690% ( 12) 00:12:09.735 4.267 - 4.290: 97.0073% ( 5) 00:12:09.735 4.290 - 4.314: 97.1068% ( 13) 00:12:09.735 4.314 - 4.338: 97.2139% ( 14) 00:12:09.735 4.338 - 4.361: 97.3364% ( 16) 00:12:09.735 4.361 - 4.385: 97.4053% ( 9) 00:12:09.735 4.385 - 4.409: 97.5277% ( 16) 00:12:09.735 4.409 - 4.433: 97.5966% ( 9) 00:12:09.735 4.433 - 4.456: 97.6119% ( 2) 00:12:09.735 4.480 - 4.504: 97.6426% ( 4) 00:12:09.735 4.527 - 4.551: 97.6502% ( 1) 00:12:09.735 4.551 - 4.575: 97.6655% ( 2) 00:12:09.735 4.599 - 4.622: 97.6808% ( 2) 00:12:09.735 4.622 - 4.646: 97.7038% ( 3) 00:12:09.735 4.717 - 4.741: 97.7191% ( 2) 00:12:09.735 4.741 - 4.764: 97.7268% ( 1) 00:12:09.735 4.764 - 4.788: 97.7344% ( 1) 00:12:09.735 4.788 - 4.812: 97.7803% ( 6) 00:12:09.735 4.812 - 4.836: 97.8033% ( 3) 00:12:09.735 4.836 - 4.859: 97.8186% ( 2) 00:12:09.735 4.859 - 4.883: 97.8722% ( 7) 00:12:09.735 4.883 - 4.907: 97.9104% ( 5) 00:12:09.735 4.907 - 4.930: 97.9334% ( 3) 00:12:09.735 4.930 - 4.954: 97.9640% ( 4) 00:12:09.735 4.954 - 4.978: 98.0023% ( 5) 00:12:09.735 4.978 - 5.001: 98.0100% ( 1) 00:12:09.735 5.001 - 5.025: 98.0482% ( 5) 00:12:09.735 5.025 - 
5.049: 98.0788% ( 4) 00:12:09.735 5.049 - 5.073: 98.1018% ( 3) 00:12:09.735 5.073 - 5.096: 98.1248% ( 3) 00:12:09.735 5.096 - 5.120: 98.1630% ( 5) 00:12:09.735 5.120 - 5.144: 98.1936% ( 4) 00:12:09.735 5.144 - 5.167: 98.2243% ( 4) 00:12:09.735 5.167 - 5.191: 98.2396% ( 2) 00:12:09.735 5.191 - 5.215: 98.2549% ( 2) 00:12:09.735 5.215 - 5.239: 98.2931% ( 5) 00:12:09.735 5.239 - 5.262: 98.3008% ( 1) 00:12:09.735 5.262 - 5.286: 98.3314% ( 4) 00:12:09.735 5.286 - 5.310: 98.3467% ( 2) 00:12:09.735 5.310 - 5.333: 98.3620% ( 2) 00:12:09.735 5.333 - 5.357: 98.3773% ( 2) 00:12:09.735 5.357 - 5.381: 98.3927% ( 2) 00:12:09.735 5.381 - 5.404: 98.4080% ( 2) 00:12:09.735 5.404 - 5.428: 98.4156% ( 1) 00:12:09.735 5.476 - 5.499: 98.4386% ( 3) 00:12:09.735 5.499 - 5.523: 98.4462% ( 1) 00:12:09.735 5.547 - 5.570: 98.4539% ( 1) 00:12:09.735 5.570 - 5.594: 98.4615% ( 1) 00:12:09.735 5.760 - 5.784: 98.4692% ( 1) 00:12:09.735 5.784 - 5.807: 98.4768% ( 1) 00:12:09.735 5.807 - 5.831: 98.4922% ( 2) 00:12:09.735 6.044 - 6.068: 98.4998% ( 1) 00:12:09.735 6.068 - 6.116: 98.5075% ( 1) 00:12:09.735 6.116 - 6.163: 98.5151% ( 1) 00:12:09.735 6.210 - 6.258: 98.5304% ( 2) 00:12:09.735 6.400 - 6.447: 98.5381% ( 1) 00:12:09.735 6.447 - 6.495: 98.5457% ( 1) 00:12:09.735 6.542 - 6.590: 98.5534% ( 1) 00:12:09.735 6.590 - 6.637: 98.5610% ( 1) 00:12:09.735 6.874 - 6.921: 98.5687% ( 1) 00:12:09.735 6.921 - 6.969: 98.5763% ( 1) 00:12:09.735 7.159 - 7.206: 98.5840% ( 1) 00:12:09.735 7.348 - 7.396: 98.5917% ( 1) 00:12:09.735 7.443 - 7.490: 98.6146% ( 3) 00:12:09.735 7.490 - 7.538: 98.6223% ( 1) 00:12:09.735 7.585 - 7.633: 98.6299% ( 1) 00:12:09.735 7.633 - 7.680: 98.6376% ( 1) 00:12:09.735 7.727 - 7.775: 98.6529% ( 2) 00:12:09.735 7.870 - 7.917: 98.6605% ( 1) 00:12:09.735 8.059 - 8.107: 98.6759% ( 2) 00:12:09.735 8.201 - 8.249: 98.6835% ( 1) 00:12:09.736 8.249 - 8.296: 98.7065% ( 3) 00:12:09.736 8.296 - 8.344: 98.7218% ( 2) 00:12:09.736 8.344 - 8.391: 98.7294% ( 1) 00:12:09.736 8.439 - 8.486: 98.7371% ( 1) 00:12:09.736 8.581 - 8.628: 98.7524% ( 2) 00:12:09.736 8.676 - 8.723: 98.7600% ( 1) 00:12:09.736 8.865 - 8.913: 98.7677% ( 1) 00:12:09.736 8.913 - 8.960: 98.7754% ( 1) 00:12:09.736 9.055 - 9.102: 98.7830% ( 1) 00:12:09.736 9.102 - 9.150: 98.7983% ( 2) 00:12:09.736 9.197 - 9.244: 98.8060% ( 1) 00:12:09.736 9.719 - 9.766: 98.8136% ( 1) 00:12:09.736 9.813 - 9.861: 98.8289% ( 2) 00:12:09.736 10.145 - 10.193: 98.8442% ( 2) 00:12:09.736 10.667 - 10.714: 98.8519% ( 1) 00:12:09.736 10.856 - 10.904: 98.8595% ( 1) 00:12:09.736 11.093 - 11.141: 98.8672% ( 1) 00:12:09.736 11.567 - 11.615: 98.8749% ( 1) 00:12:09.736 11.662 - 11.710: 98.8825% ( 1) 00:12:09.736 11.899 - 11.947: 98.8902% ( 1) 00:12:09.736 12.136 - 12.231: 98.8978% ( 1) 00:12:09.736 12.231 - 12.326: 98.9055% ( 1) 00:12:09.736 12.326 - 12.421: 98.9208% ( 2) 00:12:09.736 12.421 - 12.516: 98.9284% ( 1) 00:12:09.736 12.990 - 13.084: 98.9361% ( 1) 00:12:09.736 13.274 - 13.369: 98.9437% ( 1) 00:12:09.736 13.464 - 13.559: 98.9514% ( 1) 00:12:09.736 13.843 - 13.938: 98.9591% ( 1) 00:12:09.736 14.127 - 14.222: 98.9667% ( 1) 00:12:09.736 14.412 - 14.507: 98.9744% ( 1) 00:12:09.736 14.507 - 14.601: 98.9820% ( 1) 00:12:09.736 14.601 - 14.696: 98.9897% ( 1) 00:12:09.736 15.076 - 15.170: 99.0050% ( 2) 00:12:09.736 15.170 - 15.265: 99.0126% ( 1) 00:12:09.736 16.403 - 16.498: 99.0203% ( 1) 00:12:09.736 16.782 - 16.877: 99.0279% ( 1) 00:12:09.736 17.067 - 17.161: 99.0432% ( 2) 00:12:09.736 17.256 - 17.351: 99.0586% ( 2) 00:12:09.736 17.446 - 17.541: 99.1121% ( 7) 00:12:09.736 17.541 - 17.636: 
99.1351% ( 3) 00:12:09.736 17.636 - 17.730: 99.1810% ( 6) 00:12:09.736 17.730 - 17.825: 99.2193% ( 5) 00:12:09.736 17.825 - 17.920: 99.2652% ( 6) 00:12:09.736 17.920 - 18.015: 99.3341% ( 9) 00:12:09.736 18.015 - 18.110: 99.3724% ( 5) 00:12:09.736 18.110 - 18.204: 99.4183% ( 6) 00:12:09.736 18.204 - 18.299: 99.4336% ( 2) 00:12:09.736 18.299 - 18.394: 99.4719% ( 5) 00:12:09.736 18.394 - 18.489: 99.5561% ( 11) 00:12:09.736 18.489 - 18.584: 99.6326% ( 10) 00:12:09.736 18.584 - 18.679: 99.6556% ( 3) 00:12:09.736 18.679 - 18.773: 99.6938% ( 5) 00:12:09.736 18.773 - 18.868: 99.7398% ( 6) 00:12:09.736 18.963 - 19.058: 99.7627% ( 3) 00:12:09.736 19.153 - 19.247: 99.7857% ( 3) 00:12:09.736 19.247 - 19.342: 99.7933% ( 1) 00:12:09.736 19.342 - 19.437: 99.8316% ( 5) 00:12:09.736 19.437 - 19.532: 99.8469% ( 2) 00:12:09.736 19.532 - 19.627: 99.8622% ( 2) 00:12:09.736 20.006 - 20.101: 99.8699% ( 1) 00:12:09.736 20.101 - 20.196: 99.8775% ( 1) 00:12:09.736 20.385 - 20.480: 99.8852% ( 1) 00:12:09.736 20.575 - 20.670: 99.8928% ( 1) 00:12:09.736 21.239 - 21.333: 99.9005% ( 1) 00:12:09.736 24.652 - 24.841: 99.9082% ( 1) 00:12:09.736 3980.705 - 4004.978: 99.9617% ( 7) 00:12:09.736 4004.978 - 4029.250: 100.0000% ( 5) 00:12:09.736 00:12:09.736 Complete histogram 00:12:09.736 ================== 00:12:09.736 Range in us Cumulative Count 00:12:09.736 2.050 - 2.062: 0.0612% ( 8) 00:12:09.736 2.062 - 2.074: 15.5760% ( 2027) 00:12:09.736 2.074 - 2.086: 26.6590% ( 1448) 00:12:09.736 2.086 - 2.098: 30.3406% ( 481) 00:12:09.736 2.098 - 2.110: 53.9763% ( 3088) 00:12:09.736 2.110 - 2.121: 60.6353% ( 870) 00:12:09.736 2.121 - 2.133: 63.1152% ( 324) 00:12:09.736 2.133 - 2.145: 71.3356% ( 1074) 00:12:09.736 2.145 - 2.157: 73.8308% ( 326) 00:12:09.736 2.157 - 2.169: 77.4818% ( 477) 00:12:09.736 2.169 - 2.181: 85.9242% ( 1103) 00:12:09.736 2.181 - 2.193: 87.8607% ( 253) 00:12:09.736 2.193 - 2.204: 88.6950% ( 109) 00:12:09.736 2.204 - 2.216: 90.1263% ( 187) 00:12:09.736 2.216 - 2.228: 91.4811% ( 177) 00:12:09.736 2.228 - 2.240: 92.4761% ( 130) 00:12:09.736 2.240 - 2.252: 93.7926% ( 172) 00:12:09.736 2.252 - 2.264: 94.6269% ( 109) 00:12:09.736 2.264 - 2.276: 95.0096% ( 50) 00:12:09.736 2.276 - 2.287: 95.1933% ( 24) 00:12:09.736 2.287 - 2.299: 95.4841% ( 38) 00:12:09.736 2.299 - 2.311: 95.6678% ( 24) 00:12:09.736 2.311 - 2.323: 95.7597% ( 12) 00:12:09.736 2.323 - 2.335: 95.8898% ( 17) 00:12:09.736 2.335 - 2.347: 95.9969% ( 14) 00:12:09.736 2.347 - 2.359: 96.0658% ( 9) 00:12:09.736 2.359 - 2.370: 96.1653% ( 13) 00:12:09.736 2.370 - 2.382: 96.3490% ( 24) 00:12:09.736 2.382 - 2.394: 96.5557% ( 27) 00:12:09.736 2.394 - 2.406: 96.8695% ( 41) 00:12:09.736 2.406 - 2.418: 97.0991% ( 30) 00:12:09.736 2.418 - 2.430: 97.2752% ( 23) 00:12:09.736 2.430 - 2.441: 97.6655% ( 51) 00:12:09.736 2.441 - 2.453: 97.8339% ( 22) 00:12:09.736 2.453 - 2.465: 98.0406% ( 27) 00:12:09.736 2.465 - 2.477: 98.2166% ( 23) 00:12:09.736 2.477 - 2.489: 98.2625% ( 6) 00:12:09.736 2.489 - 2.501: 98.3544% ( 12) 00:12:09.736 2.501 - 2.513: 98.4309% ( 10) 00:12:09.736 2.513 - 2.524: 98.4692% ( 5) 00:12:09.736 2.524 - 2.536: 98.4845% ( 2) 00:12:09.736 2.536 - 2.548: 98.4922% ( 1) 00:12:09.736 2.548 - 2.560: 98.5075% ( 2) 00:12:09.736 2.560 - 2.572: 98.5151% ( 1) 00:12:09.736 2.572 - 2.584: 98.5228% ( 1) 00:12:09.736 2.584 - 2.596: 98.5304% ( 1) 00:12:09.736 2.607 - 2.619: 98.5381% ( 1) 00:12:09.736 2.619 - 2.631: 98.5534% ( 2) 00:12:09.736 2.631 - 2.643: 98.5610% ( 1) 00:12:09.736 2.702 - 2.714: 98.5687% ( 1) 00:12:09.736 2.714 - 2.726: 98.5763% ( 1) 00:12:09.736 2.750 - 
2.761: 98.5840% ( 1) 00:12:09.736 2.797 - 2.809: 98.5917% ( 1) 00:12:09.736 2.833 - 2.844: 98.5993% ( 1) 00:12:09.736 3.342 - 3.366: 98.6070% ( 1) 00:12:09.736 3.366 - 3.390: 98.6146% ( 1) 00:12:09.736 3.461 - 3.484: 98.6223% ( 1) 00:12:09.736 3.484 - 3.508: 98.6299% ( 1) 00:12:09.736 3.508 - 3.532: 98.6376% ( 1) 00:12:09.736 3.532 - 3.556: 98.6605% ( 3) 00:12:09.736 3.603 - 3.627: 98.6682% ( 1) 00:12:09.736 3.627 - 3.650: 98.6759% ( 1) 00:12:09.736 3.674 - 3.698: 98.6835% ( 1) 00:12:09.736 3.721 - 3.745: 98.6912% ( 1) 00:12:09.736 3.887 - 3.911: 98.7065% ( 2) 00:12:09.736 3.935 - 3.959: 98.7141% ( 1) 00:12:09.736 3.982 - 4.006: 98.7218% ( 1) 00:12:09.736 4.006 - 4.030: 98.7294% ( 1) 00:12:09.736 4.053 - 4.077: 98.7371% ( 1) 00:12:09.736 4.124 - 4.148: 98.7447% ( 1) 00:12:09.736 4.148 - 4.172: 98.7600% ( 2) 00:12:09.736 4.622 - 4.646: 98.7677% ( 1) 00:12:09.736 5.073 - 5.096: 98.7754% ( 1) 00:12:09.736 5.144 - 5.167: 98.7830% ( 1) 00:12:09.736 5.191 - 5.215: 98.7907% ( 1) 00:12:09.736 5.239 - 5.262: 98.7983% ( 1) 00:12:09.736 5.262 - 5.286: 98.8060% ( 1) 00:12:09.736 5.286 - 5.310: 98.8136% ( 1) 00:12:09.736 5.333 - 5.357: 98.8213% ( 1) 00:12:09.736 5.499 - 5.523: 98.8289% ( 1) 00:12:09.736 5.594 - 5.618: 98.8366% ( 1) 00:12:09.736 5.641 - 5.665: 98.8442% ( 1) 00:12:09.736 6.210 - 6.258: 98.8519% ( 1) 00:12:09.736 6.305 - 6.353: 98.8595% ( 1) 00:12:09.736 6.495 - 6.542: 9[2024-07-12 15:48:39.155047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:09.736 8.8749% ( 2) 00:12:09.736 6.637 - 6.684: 98.8825% ( 1) 00:12:09.736 6.684 - 6.732: 98.8902% ( 1) 00:12:09.736 6.874 - 6.921: 98.8978% ( 1) 00:12:09.736 6.921 - 6.969: 98.9055% ( 1) 00:12:09.736 7.538 - 7.585: 98.9131% ( 1) 00:12:09.736 7.870 - 7.917: 98.9208% ( 1) 00:12:09.736 8.059 - 8.107: 98.9284% ( 1) 00:12:09.736 15.360 - 15.455: 98.9361% ( 1) 00:12:09.736 15.644 - 15.739: 98.9437% ( 1) 00:12:09.736 15.739 - 15.834: 98.9591% ( 2) 00:12:09.736 15.929 - 16.024: 98.9820% ( 3) 00:12:09.736 16.024 - 16.119: 99.0126% ( 4) 00:12:09.736 16.119 - 16.213: 99.0509% ( 5) 00:12:09.736 16.213 - 16.308: 99.0892% ( 5) 00:12:09.736 16.308 - 16.403: 99.1045% ( 2) 00:12:09.736 16.498 - 16.593: 99.1274% ( 3) 00:12:09.736 16.593 - 16.687: 99.1504% ( 3) 00:12:09.736 16.687 - 16.782: 99.1581% ( 1) 00:12:09.736 16.782 - 16.877: 99.2116% ( 7) 00:12:09.736 16.877 - 16.972: 99.2269% ( 2) 00:12:09.736 16.972 - 17.067: 99.2499% ( 3) 00:12:09.736 17.067 - 17.161: 99.2729% ( 3) 00:12:09.736 17.161 - 17.256: 99.3111% ( 5) 00:12:09.736 17.351 - 17.446: 99.3418% ( 4) 00:12:09.736 17.636 - 17.730: 99.3494% ( 1) 00:12:09.736 17.825 - 17.920: 99.3571% ( 1) 00:12:09.736 18.015 - 18.110: 99.3877% ( 4) 00:12:09.736 18.110 - 18.204: 99.4030% ( 2) 00:12:09.736 18.204 - 18.299: 99.4106% ( 1) 00:12:09.736 18.394 - 18.489: 99.4259% ( 2) 00:12:09.736 18.489 - 18.584: 99.4336% ( 1) 00:12:09.736 18.868 - 18.963: 99.4413% ( 1) 00:12:09.736 19.153 - 19.247: 99.4489% ( 1) 00:12:09.736 21.428 - 21.523: 99.4566% ( 1) 00:12:09.736 3980.705 - 4004.978: 99.7627% ( 40) 00:12:09.736 4004.978 - 4029.250: 100.0000% ( 31) 00:12:09.736 00:12:09.736 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:09.736 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:09.736 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode2 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:09.737 [ 00:12:09.737 { 00:12:09.737 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:09.737 "subtype": "Discovery", 00:12:09.737 "listen_addresses": [], 00:12:09.737 "allow_any_host": true, 00:12:09.737 "hosts": [] 00:12:09.737 }, 00:12:09.737 { 00:12:09.737 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:09.737 "subtype": "NVMe", 00:12:09.737 "listen_addresses": [ 00:12:09.737 { 00:12:09.737 "trtype": "VFIOUSER", 00:12:09.737 "adrfam": "IPv4", 00:12:09.737 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:09.737 "trsvcid": "0" 00:12:09.737 } 00:12:09.737 ], 00:12:09.737 "allow_any_host": true, 00:12:09.737 "hosts": [], 00:12:09.737 "serial_number": "SPDK1", 00:12:09.737 "model_number": "SPDK bdev Controller", 00:12:09.737 "max_namespaces": 32, 00:12:09.737 "min_cntlid": 1, 00:12:09.737 "max_cntlid": 65519, 00:12:09.737 "namespaces": [ 00:12:09.737 { 00:12:09.737 "nsid": 1, 00:12:09.737 "bdev_name": "Malloc1", 00:12:09.737 "name": "Malloc1", 00:12:09.737 "nguid": "68BF07FE5FB04124BAD5FDAEB80F808B", 00:12:09.737 "uuid": "68bf07fe-5fb0-4124-bad5-fdaeb80f808b" 00:12:09.737 }, 00:12:09.737 { 00:12:09.737 "nsid": 2, 00:12:09.737 "bdev_name": "Malloc3", 00:12:09.737 "name": "Malloc3", 00:12:09.737 "nguid": "6A0E935EEF3D46969AE794EF1D8BE1D9", 00:12:09.737 "uuid": "6a0e935e-ef3d-4696-9ae7-94ef1d8be1d9" 00:12:09.737 } 00:12:09.737 ] 00:12:09.737 }, 00:12:09.737 { 00:12:09.737 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:09.737 "subtype": "NVMe", 00:12:09.737 "listen_addresses": [ 00:12:09.737 { 00:12:09.737 "trtype": "VFIOUSER", 00:12:09.737 "adrfam": "IPv4", 00:12:09.737 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:09.737 "trsvcid": "0" 00:12:09.737 } 00:12:09.737 ], 00:12:09.737 "allow_any_host": true, 00:12:09.737 "hosts": [], 00:12:09.737 "serial_number": "SPDK2", 00:12:09.737 "model_number": "SPDK bdev Controller", 00:12:09.737 "max_namespaces": 32, 00:12:09.737 "min_cntlid": 1, 00:12:09.737 "max_cntlid": 65519, 00:12:09.737 "namespaces": [ 00:12:09.737 { 00:12:09.737 "nsid": 1, 00:12:09.737 "bdev_name": "Malloc2", 00:12:09.737 "name": "Malloc2", 00:12:09.737 "nguid": "D75C092CBA084CB1A22D1177BE9468EA", 00:12:09.737 "uuid": "d75c092c-ba08-4cb1-a22d-1177be9468ea" 00:12:09.737 } 00:12:09.737 ] 00:12:09.737 } 00:12:09.737 ] 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4172562 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:09.737 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:09.994 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.994 [2024-07-12 15:48:39.599896] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:09.994 Malloc4 00:12:10.251 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:10.251 [2024-07-12 15:48:39.953477] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:10.251 15:48:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:10.508 Asynchronous Event Request test 00:12:10.508 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:10.508 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:10.508 Registering asynchronous event callbacks... 00:12:10.508 Starting namespace attribute notice tests for all controllers... 00:12:10.508 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:10.508 aer_cb - Changed Namespace 00:12:10.508 Cleaning up... 00:12:10.508 [ 00:12:10.508 { 00:12:10.508 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:10.508 "subtype": "Discovery", 00:12:10.508 "listen_addresses": [], 00:12:10.508 "allow_any_host": true, 00:12:10.508 "hosts": [] 00:12:10.508 }, 00:12:10.508 { 00:12:10.508 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:10.508 "subtype": "NVMe", 00:12:10.508 "listen_addresses": [ 00:12:10.508 { 00:12:10.508 "trtype": "VFIOUSER", 00:12:10.508 "adrfam": "IPv4", 00:12:10.508 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:10.508 "trsvcid": "0" 00:12:10.508 } 00:12:10.508 ], 00:12:10.508 "allow_any_host": true, 00:12:10.508 "hosts": [], 00:12:10.508 "serial_number": "SPDK1", 00:12:10.508 "model_number": "SPDK bdev Controller", 00:12:10.508 "max_namespaces": 32, 00:12:10.508 "min_cntlid": 1, 00:12:10.508 "max_cntlid": 65519, 00:12:10.508 "namespaces": [ 00:12:10.508 { 00:12:10.508 "nsid": 1, 00:12:10.508 "bdev_name": "Malloc1", 00:12:10.508 "name": "Malloc1", 00:12:10.508 "nguid": "68BF07FE5FB04124BAD5FDAEB80F808B", 00:12:10.508 "uuid": "68bf07fe-5fb0-4124-bad5-fdaeb80f808b" 00:12:10.508 }, 00:12:10.508 { 00:12:10.508 "nsid": 2, 00:12:10.508 "bdev_name": "Malloc3", 00:12:10.508 "name": "Malloc3", 00:12:10.508 "nguid": "6A0E935EEF3D46969AE794EF1D8BE1D9", 00:12:10.508 "uuid": "6a0e935e-ef3d-4696-9ae7-94ef1d8be1d9" 00:12:10.508 } 00:12:10.508 ] 00:12:10.508 }, 00:12:10.508 { 00:12:10.508 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:10.508 "subtype": "NVMe", 00:12:10.508 "listen_addresses": [ 00:12:10.508 { 00:12:10.508 "trtype": "VFIOUSER", 00:12:10.508 "adrfam": "IPv4", 00:12:10.508 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:10.508 "trsvcid": "0" 00:12:10.508 } 00:12:10.508 ], 00:12:10.508 "allow_any_host": true, 00:12:10.508 "hosts": [], 00:12:10.508 "serial_number": "SPDK2", 00:12:10.508 "model_number": "SPDK bdev Controller", 00:12:10.508 
"max_namespaces": 32, 00:12:10.508 "min_cntlid": 1, 00:12:10.508 "max_cntlid": 65519, 00:12:10.508 "namespaces": [ 00:12:10.508 { 00:12:10.508 "nsid": 1, 00:12:10.508 "bdev_name": "Malloc2", 00:12:10.508 "name": "Malloc2", 00:12:10.508 "nguid": "D75C092CBA084CB1A22D1177BE9468EA", 00:12:10.508 "uuid": "d75c092c-ba08-4cb1-a22d-1177be9468ea" 00:12:10.508 }, 00:12:10.508 { 00:12:10.508 "nsid": 2, 00:12:10.508 "bdev_name": "Malloc4", 00:12:10.508 "name": "Malloc4", 00:12:10.508 "nguid": "EBD75496A031434FA364CA0615735CA0", 00:12:10.508 "uuid": "ebd75496-a031-434f-a364-ca0615735ca0" 00:12:10.508 } 00:12:10.508 ] 00:12:10.508 } 00:12:10.508 ] 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4172562 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4166339 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4166339 ']' 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4166339 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4166339 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4166339' 00:12:10.764 killing process with pid 4166339 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4166339 00:12:10.764 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4166339 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4172704 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4172704' 00:12:11.021 Process pid: 4172704 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4172704 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4172704 ']' 00:12:11.021 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.021 15:48:40 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.022 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.022 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.022 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:11.022 [2024-07-12 15:48:40.685884] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:11.022 [2024-07-12 15:48:40.686923] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:12:11.022 [2024-07-12 15:48:40.686990] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.022 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.022 [2024-07-12 15:48:40.746223] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.280 [2024-07-12 15:48:40.849036] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.280 [2024-07-12 15:48:40.849088] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.280 [2024-07-12 15:48:40.849102] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.280 [2024-07-12 15:48:40.849113] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.280 [2024-07-12 15:48:40.849122] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.280 [2024-07-12 15:48:40.849341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.280 [2024-07-12 15:48:40.849406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.280 [2024-07-12 15:48:40.849488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:11.280 [2024-07-12 15:48:40.849490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.280 [2024-07-12 15:48:40.946464] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:11.280 [2024-07-12 15:48:40.946649] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:11.280 [2024-07-12 15:48:40.946983] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:11.280 [2024-07-12 15:48:40.947576] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:11.280 [2024-07-12 15:48:40.947822] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
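At this point the first target has been killed and a new nvmf_tgt has been launched with --interrupt-mode and a four-core mask, which is why the thread.c notices above report the poll groups being placed in intr mode. A rough sketch of that restart, assuming the same binary path and the default /var/tmp/spdk.sock RPC socket (the waitforlisten helper in the trace belongs to the test harness and is approximated here with a simple poll):

# Sketch: restart the target in interrupt mode with the flags recorded above.
# OLD_PID is a placeholder for the pid of the previous nvmf_tgt instance.
kill "$OLD_PID" 2>/dev/null || true
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
# wait for the RPC socket before creating the VFIOUSER transport with -M -I
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I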
00:12:11.280 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.280 15:48:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:11.280 15:48:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:12.651 15:48:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:12.651 15:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:12.651 15:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:12.651 15:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:12.651 15:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:12.651 15:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:12.909 Malloc1 00:12:12.909 15:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:13.166 15:48:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:13.424 15:48:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:13.680 15:48:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:13.680 15:48:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:13.680 15:48:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:13.938 Malloc2 00:12:13.938 15:48:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:14.194 15:48:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4172704 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4172704 ']' 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4172704 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:14.763 15:48:44 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4172704 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4172704' 00:12:14.763 killing process with pid 4172704 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4172704 00:12:14.763 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4172704 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:15.380 00:12:15.380 real 0m52.764s 00:12:15.380 user 3m28.110s 00:12:15.380 sys 0m4.577s 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:15.380 ************************************ 00:12:15.380 END TEST nvmf_vfio_user 00:12:15.380 ************************************ 00:12:15.380 15:48:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:15.380 15:48:44 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:15.380 15:48:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:15.380 15:48:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.380 15:48:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:15.380 ************************************ 00:12:15.380 START TEST nvmf_vfio_user_nvme_compliance 00:12:15.380 ************************************ 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:15.380 * Looking for test storage... 
00:12:15.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.380 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=4173310 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4173310' 00:12:15.381 Process pid: 4173310 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4173310 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 4173310 ']' 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:15.381 15:48:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:15.381 [2024-07-12 15:48:44.955084] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:12:15.381 [2024-07-12 15:48:44.955162] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.381 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.381 [2024-07-12 15:48:45.016029] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:15.639 [2024-07-12 15:48:45.123250] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.639 [2024-07-12 15:48:45.123297] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.639 [2024-07-12 15:48:45.123334] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.639 [2024-07-12 15:48:45.123345] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.639 [2024-07-12 15:48:45.123361] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:15.639 [2024-07-12 15:48:45.123501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.639 [2024-07-12 15:48:45.123528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.639 [2024-07-12 15:48:45.123533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.639 15:48:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.639 15:48:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:15.639 15:48:45 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.572 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:16.830 malloc0 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:16.830 15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.830 
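The compliance pass that follows drives a single vfio-user subsystem created through the rpc_cmd calls traced above (VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 listening at /var/run/vfio-user). Written out as direct rpc.py calls rather than the harness's rpc_cmd wrapper — a sketch, with $SPDK_DIR again a placeholder — the setup is:

# Sketch of the setup the compliance binary below depends on; the RPC names and
# arguments are the ones recorded in the trace, only the shell wrapper is dropped.
mkdir -p /var/run/vfio-user
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
$SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_compliance binary in the next trace lines is then pointed at 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'.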
15:48:46 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:16.830 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.830 00:12:16.830 00:12:16.830 CUnit - A unit testing framework for C - Version 2.1-3 00:12:16.830 http://cunit.sourceforge.net/ 00:12:16.830 00:12:16.830 00:12:16.830 Suite: nvme_compliance 00:12:16.830 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 15:48:46.488952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:16.830 [2024-07-12 15:48:46.490434] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:16.830 [2024-07-12 15:48:46.490460] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:16.830 [2024-07-12 15:48:46.490473] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:16.830 [2024-07-12 15:48:46.491971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:16.830 passed 00:12:17.088 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 15:48:46.576569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.088 [2024-07-12 15:48:46.579596] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.088 passed 00:12:17.088 Test: admin_identify_ns ...[2024-07-12 15:48:46.669216] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.088 [2024-07-12 15:48:46.726345] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:17.088 [2024-07-12 15:48:46.734345] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:17.088 [2024-07-12 15:48:46.755458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.088 passed 00:12:17.346 Test: admin_get_features_mandatory_features ...[2024-07-12 15:48:46.843609] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.346 [2024-07-12 15:48:46.846646] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.346 passed 00:12:17.346 Test: admin_get_features_optional_features ...[2024-07-12 15:48:46.931179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.346 [2024-07-12 15:48:46.934200] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.346 passed 00:12:17.346 Test: admin_set_features_number_of_queues ...[2024-07-12 15:48:47.017881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.603 [2024-07-12 15:48:47.122454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.603 passed 00:12:17.603 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 15:48:47.208803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.603 [2024-07-12 15:48:47.211824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.603 passed 00:12:17.603 Test: admin_get_log_page_with_lpo ...[2024-07-12 15:48:47.294809] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.861 [2024-07-12 15:48:47.362348] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:17.861 [2024-07-12 15:48:47.375414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.861 passed 00:12:17.861 Test: fabric_property_get ...[2024-07-12 15:48:47.459255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.861 [2024-07-12 15:48:47.460559] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:17.861 [2024-07-12 15:48:47.462278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.861 passed 00:12:17.861 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 15:48:47.548854] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:17.861 [2024-07-12 15:48:47.550155] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:17.861 [2024-07-12 15:48:47.551877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:17.861 passed 00:12:18.118 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 15:48:47.634870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.118 [2024-07-12 15:48:47.719353] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:18.118 [2024-07-12 15:48:47.735358] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:18.118 [2024-07-12 15:48:47.740451] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.118 passed 00:12:18.118 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 15:48:47.824174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.118 [2024-07-12 15:48:47.825515] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:18.118 [2024-07-12 15:48:47.827196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.375 passed 00:12:18.375 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 15:48:47.911507] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.375 [2024-07-12 15:48:47.986328] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:18.375 [2024-07-12 15:48:48.010327] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:18.375 [2024-07-12 15:48:48.015451] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.375 passed 00:12:18.375 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 15:48:48.101748] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.375 [2024-07-12 15:48:48.103089] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:18.375 [2024-07-12 15:48:48.103148] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:18.632 [2024-07-12 15:48:48.104777] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.633 passed 00:12:18.633 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 15:48:48.191145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.633 [2024-07-12 15:48:48.277329] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:18.633 [2024-07-12 15:48:48.285327] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:18.633 [2024-07-12 15:48:48.293326] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:18.633 [2024-07-12 15:48:48.301325] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:18.633 [2024-07-12 15:48:48.330437] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.633 passed 00:12:18.890 Test: admin_create_io_sq_verify_pc ...[2024-07-12 15:48:48.412258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:18.890 [2024-07-12 15:48:48.427350] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:18.890 [2024-07-12 15:48:48.444618] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:18.890 passed 00:12:18.890 Test: admin_create_io_qp_max_qps ...[2024-07-12 15:48:48.528169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:20.262 [2024-07-12 15:48:49.633344] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:20.520 [2024-07-12 15:48:50.022566] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:20.520 passed 00:12:20.520 Test: admin_create_io_sq_shared_cq ...[2024-07-12 15:48:50.110066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:20.520 [2024-07-12 15:48:50.241325] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:20.778 [2024-07-12 15:48:50.278409] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:20.778 passed 00:12:20.778 00:12:20.778 Run Summary: Type Total Ran Passed Failed Inactive 00:12:20.778 suites 1 1 n/a 0 0 00:12:20.778 tests 18 18 18 0 0 00:12:20.778 asserts 360 360 360 0 n/a 00:12:20.778 00:12:20.778 Elapsed time = 1.572 seconds 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4173310 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 4173310 ']' 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 4173310 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4173310 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4173310' 00:12:20.778 killing process with pid 4173310 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 4173310 00:12:20.778 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 4173310 00:12:21.037 15:48:50 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:21.037 00:12:21.037 real 0m5.806s 00:12:21.037 user 0m16.229s 00:12:21.037 sys 0m0.554s 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:21.037 ************************************ 00:12:21.037 END TEST nvmf_vfio_user_nvme_compliance 00:12:21.037 ************************************ 00:12:21.037 15:48:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:21.037 15:48:50 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:21.037 15:48:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:21.037 15:48:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.037 15:48:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:21.037 ************************************ 00:12:21.037 START TEST nvmf_vfio_user_fuzz 00:12:21.037 ************************************ 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:21.037 * Looking for test storage... 00:12:21.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:21.037 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.295 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.295 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.295 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.295 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.296 15:48:50 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4174028 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4174028' 00:12:21.296 Process pid: 4174028 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4174028 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4174028 ']' 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:21.296 15:48:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:21.554 15:48:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:21.554 15:48:51 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:21.554 15:48:51 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 malloc0 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:22.488 15:48:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:54.549 Fuzzing completed. 
Shutting down the fuzz application 00:12:54.549 00:12:54.549 Dumping successful admin opcodes: 00:12:54.549 8, 9, 10, 24, 00:12:54.549 Dumping successful io opcodes: 00:12:54.549 0, 00:12:54.549 NS: 0x200003a1ef00 I/O qp, Total commands completed: 714687, total successful commands: 2784, random_seed: 3121417216 00:12:54.549 NS: 0x200003a1ef00 admin qp, Total commands completed: 142774, total successful commands: 1160, random_seed: 3158463104 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4174028 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4174028 ']' 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 4174028 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4174028 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4174028' 00:12:54.549 killing process with pid 4174028 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 4174028 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 4174028 00:12:54.549 15:49:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:54.549 15:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:54.549 00:12:54.549 real 0m32.312s 00:12:54.549 user 0m33.468s 00:12:54.549 sys 0m27.267s 00:12:54.549 15:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:54.549 15:49:23 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:54.549 ************************************ 00:12:54.549 END TEST nvmf_vfio_user_fuzz 00:12:54.549 ************************************ 00:12:54.549 15:49:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:54.549 15:49:23 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:54.549 15:49:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:54.549 15:49:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.549 15:49:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.549 ************************************ 
00:12:54.549 START TEST nvmf_host_management 00:12:54.549 ************************************ 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:54.549 * Looking for test storage... 00:12:54.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.549 
15:49:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.549 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.550 15:49:23 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.550 15:49:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.485 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:55.486 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:55.486 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:55.486 Found net devices under 0000:09:00.0: cvl_0_0 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:55.486 Found net devices under 0000:09:00.1: cvl_0_1 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:55.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:12:55.486 00:12:55.486 --- 10.0.0.2 ping statistics --- 00:12:55.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.486 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:55.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:55.486 00:12:55.486 --- 10.0.0.1 ping statistics --- 00:12:55.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.486 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4179353 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4179353 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4179353 ']' 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.486 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:55.744 [2024-07-12 15:49:25.250721] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:12:55.744 [2024-07-12 15:49:25.250810] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.744 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.744 [2024-07-12 15:49:25.319035] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.744 [2024-07-12 15:49:25.430352] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.744 [2024-07-12 15:49:25.430423] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:55.744 [2024-07-12 15:49:25.430436] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.744 [2024-07-12 15:49:25.430448] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.745 [2024-07-12 15:49:25.430473] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.745 [2024-07-12 15:49:25.430527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.745 [2024-07-12 15:49:25.430589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.745 [2024-07-12 15:49:25.430653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:55.745 [2024-07-12 15:49:25.430656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.002 [2024-07-12 15:49:25.601165] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:56.002 15:49:25 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.002 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.002 Malloc0 00:12:56.002 [2024-07-12 15:49:25.667107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4179405 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4179405 /var/tmp/bdevperf.sock 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4179405 ']' 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:56.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:56.003 { 00:12:56.003 "params": { 00:12:56.003 "name": "Nvme$subsystem", 00:12:56.003 "trtype": "$TEST_TRANSPORT", 00:12:56.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:56.003 "adrfam": "ipv4", 00:12:56.003 "trsvcid": "$NVMF_PORT", 00:12:56.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:56.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:56.003 "hdgst": ${hdgst:-false}, 00:12:56.003 "ddgst": ${ddgst:-false} 00:12:56.003 }, 00:12:56.003 "method": "bdev_nvme_attach_controller" 00:12:56.003 } 00:12:56.003 EOF 00:12:56.003 )") 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:56.003 15:49:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:56.003 "params": { 00:12:56.003 "name": "Nvme0", 00:12:56.003 "trtype": "tcp", 00:12:56.003 "traddr": "10.0.0.2", 00:12:56.003 "adrfam": "ipv4", 00:12:56.003 "trsvcid": "4420", 00:12:56.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:56.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:56.003 "hdgst": false, 00:12:56.003 "ddgst": false 00:12:56.003 }, 00:12:56.003 "method": "bdev_nvme_attach_controller" 00:12:56.003 }' 00:12:56.295 [2024-07-12 15:49:25.749793] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:12:56.295 [2024-07-12 15:49:25.749881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179405 ] 00:12:56.295 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.295 [2024-07-12 15:49:25.817351] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.295 [2024-07-12 15:49:25.929366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.556 Running I/O for 10 seconds... 
00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:56.556 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:56.814 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:56.814 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:56.814 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:56.814 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:56.814 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.814 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:56.814 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.073 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.073 [2024-07-12 15:49:26.554309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.073 [2024-07-12 15:49:26.554875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.073 [2024-07-12 15:49:26.554888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.554903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.554917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.554931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.554945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.554960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.554974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.554988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:57.074 [2024-07-12 15:49:26.555840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.555971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.555985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.556000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.556014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.556029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.556043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.556058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.556072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.074 [2024-07-12 15:49:26.556087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.074 [2024-07-12 15:49:26.556101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.075 [2024-07-12 15:49:26.556116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.075 [2024-07-12 
15:49:26.556130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.075 [2024-07-12 15:49:26.556146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.075 [2024-07-12 15:49:26.556160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.075 [2024-07-12 15:49:26.556175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.075 [2024-07-12 15:49:26.556189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.075 [2024-07-12 15:49:26.556204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.075 [2024-07-12 15:49:26.556222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.075 [2024-07-12 15:49:26.556237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.075 [2024-07-12 15:49:26.556251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.075 [2024-07-12 15:49:26.556267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:57.075 [2024-07-12 15:49:26.556281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:57.075 [2024-07-12 15:49:26.556388] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa76ca0 was disconnected and freed. reset controller. 
00:12:57.075 [2024-07-12 15:49:26.557564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:57.075 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.075 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:57.075 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.075 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:57.075 task offset: 78592 on job bdev=Nvme0n1 fails 00:12:57.075 00:12:57.075 Latency(us) 00:12:57.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.075 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:57.075 Job: Nvme0n1 ended in about 0.40 seconds with error 00:12:57.075 Verification LBA range: start 0x0 length 0x400 00:12:57.075 Nvme0n1 : 0.40 1424.90 89.06 158.32 0.00 39288.49 2597.17 34952.53 00:12:57.075 =================================================================================================================== 00:12:57.075 Total : 1424.90 89.06 158.32 0.00 39288.49 2597.17 34952.53 00:12:57.075 [2024-07-12 15:49:26.559554] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:57.075 [2024-07-12 15:49:26.559594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x665980 (9): Bad file descriptor 00:12:57.075 15:49:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.075 15:49:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:57.075 [2024-07-12 15:49:26.701479] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
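The abort dump above is the expected consequence of revoking host access while I/O is in flight: removing the host from the subsystem tears down its queue pairs, every queued command completes as ABORTED - SQ DELETION, and the host-side bdev_nvme layer schedules a controller reset that only succeeds once access is restored. A minimal sketch of the two RPCs being exercised here, assuming the default RPC socket:

# revoke host0's access to cnode0; its qpairs are disconnected and in-flight I/O is aborted
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# restore access; the pending bdev_nvme reset/reconnect can then complete
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0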
00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4179405 00:12:58.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4179405) - No such process 00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:58.008 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:58.009 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:58.009 { 00:12:58.009 "params": { 00:12:58.009 "name": "Nvme$subsystem", 00:12:58.009 "trtype": "$TEST_TRANSPORT", 00:12:58.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:58.009 "adrfam": "ipv4", 00:12:58.009 "trsvcid": "$NVMF_PORT", 00:12:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:58.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:58.009 "hdgst": ${hdgst:-false}, 00:12:58.009 "ddgst": ${ddgst:-false} 00:12:58.009 }, 00:12:58.009 "method": "bdev_nvme_attach_controller" 00:12:58.009 } 00:12:58.009 EOF 00:12:58.009 )") 00:12:58.009 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:58.009 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:58.009 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:58.009 15:49:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:58.009 "params": { 00:12:58.009 "name": "Nvme0", 00:12:58.009 "trtype": "tcp", 00:12:58.009 "traddr": "10.0.0.2", 00:12:58.009 "adrfam": "ipv4", 00:12:58.009 "trsvcid": "4420", 00:12:58.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:58.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:58.009 "hdgst": false, 00:12:58.009 "ddgst": false 00:12:58.009 }, 00:12:58.009 "method": "bdev_nvme_attach_controller" 00:12:58.009 }' 00:12:58.009 [2024-07-12 15:49:27.618579] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:12:58.009 [2024-07-12 15:49:27.618681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4179686 ] 00:12:58.009 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.009 [2024-07-12 15:49:27.679243] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.267 [2024-07-12 15:49:27.793248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.267 Running I/O for 1 seconds... 
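The second bdevperf pass above is configured entirely through an inline JSON document handed over /dev/fd/62; the only entry gen_nvmf_target_json emits is a bdev_nvme_attach_controller call pointing at the target. A standalone sketch of an equivalent invocation follows; the surrounding "subsystems"/"bdev" wrapper is the standard SPDK --json layout and is an assumption here, since the log only prints the attach entry itself:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload as the run above: queue depth 64, 64 KiB verify I/O for 1 second
build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1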
00:12:59.640 00:12:59.640 Latency(us) 00:12:59.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.640 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:59.640 Verification LBA range: start 0x0 length 0x400 00:12:59.640 Nvme0n1 : 1.01 1519.49 94.97 0.00 0.00 41462.95 10631.40 34175.81 00:12:59.640 =================================================================================================================== 00:12:59.640 Total : 1519.49 94.97 0.00 0.00 41462.95 10631.40 34175.81 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.640 rmmod nvme_tcp 00:12:59.640 rmmod nvme_fabrics 00:12:59.640 rmmod nvme_keyring 00:12:59.640 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4179353 ']' 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4179353 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4179353 ']' 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4179353 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4179353 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4179353' 00:12:59.641 killing process with pid 4179353 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4179353 00:12:59.641 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4179353 00:13:00.205 [2024-07-12 15:49:29.637181] 
app.c: 715:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.205 15:49:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.104 15:49:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:02.104 15:49:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:02.104 00:13:02.104 real 0m8.651s 00:13:02.104 user 0m19.453s 00:13:02.104 sys 0m2.661s 00:13:02.104 15:49:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:02.104 15:49:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:02.104 ************************************ 00:13:02.104 END TEST nvmf_host_management 00:13:02.104 ************************************ 00:13:02.104 15:49:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:02.104 15:49:31 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:02.104 15:49:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:02.104 15:49:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:02.104 15:49:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:02.104 ************************************ 00:13:02.104 START TEST nvmf_lvol 00:13:02.104 ************************************ 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:02.104 * Looking for test storage... 
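nvmftestfini above unwinds the host_management fixture in reverse order: the kernel initiator modules are unloaded, the nvmf_tgt process started for the test is killed by pid, and the test address is flushed from the initiator-facing interface. A condensed sketch of those steps, using the pid and interface name from this particular run:

# unload the kernel NVMe-oF initiator stack
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the target application started for this test (pid from this run)
kill 4179353
# drop the test address from the initiator-side interface
ip -4 addr flush cvl_0_1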
00:13:02.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.104 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.363 15:49:31 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:02.363 15:49:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:04.272 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:04.272 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:04.272 Found net devices under 0000:09:00.0: cvl_0_0 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:04.272 Found net devices under 0000:09:00.1: cvl_0_1 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:04.272 
15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:04.272 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:04.531 15:49:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:04.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:04.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:13:04.531 00:13:04.531 --- 10.0.0.2 ping statistics --- 00:13:04.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.531 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:04.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:04.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:13:04.531 00:13:04.531 --- 10.0.0.1 ping statistics --- 00:13:04.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:04.531 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4181879 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4181879 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4181879 ']' 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.531 15:49:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:04.531 [2024-07-12 15:49:34.211061] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:13:04.531 [2024-07-12 15:49:34.211152] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.531 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.789 [2024-07-12 15:49:34.277561] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.789 [2024-07-12 15:49:34.386546] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.789 [2024-07-12 15:49:34.386610] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
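The two pings succeed because nvmf_tcp_init has split the link pair across network namespaces: cvl_0_0 is moved into cvl_0_0_ns_spdk and given the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and TCP port 4420 is allowed through the firewall. A condensed sketch of that plumbing, with the interface and namespace names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions before starting nvmf_tgt inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1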
00:13:04.789 [2024-07-12 15:49:34.386624] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.789 [2024-07-12 15:49:34.386635] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.789 [2024-07-12 15:49:34.386645] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.789 [2024-07-12 15:49:34.386707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.789 [2024-07-12 15:49:34.386785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.789 [2024-07-12 15:49:34.386788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.723 15:49:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:05.723 15:49:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:05.723 15:49:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.723 15:49:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:05.723 15:49:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:05.723 15:49:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.723 15:49:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:05.981 [2024-07-12 15:49:35.488835] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.981 15:49:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:06.239 15:49:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:06.239 15:49:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:06.497 15:49:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:06.498 15:49:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:06.755 15:49:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:07.013 15:49:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a765f19a-3122-4db3-81c8-cc9c37f79da7 00:13:07.013 15:49:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a765f19a-3122-4db3-81c8-cc9c37f79da7 lvol 20 00:13:07.271 15:49:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=15d14c46-7eb4-4698-8b5d-594d0fc3da79 00:13:07.271 15:49:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:07.528 15:49:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15d14c46-7eb4-4698-8b5d-594d0fc3da79 00:13:07.786 15:49:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
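The trace above provisions the volume that the perf run will target: two 64 MiB malloc bdevs (512-byte blocks) are striped into a raid0, an lvstore named lvs is created on the raid, a 20 MiB lvol is carved out of it, and the lvol is exported as a namespace of cnode0 on a TCP listener at 10.0.0.2:4420. The same flow as plain rpc.py calls, with the UUIDs this run happened to generate:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512        # -> Malloc0
scripts/rpc.py bdev_malloc_create 64 512        # -> Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
scripts/rpc.py bdev_lvol_create -u a765f19a-3122-4db3-81c8-cc9c37f79da7 lvol 20
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15d14c46-7eb4-4698-8b5d-594d0fc3da79
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420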
00:13:08.043 [2024-07-12 15:49:37.659759] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.044 15:49:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:08.301 15:49:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4182318 00:13:08.301 15:49:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:08.301 15:49:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:08.301 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.234 15:49:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 15d14c46-7eb4-4698-8b5d-594d0fc3da79 MY_SNAPSHOT 00:13:09.800 15:49:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a9098789-f3f4-481d-88b9-2cda63176994 00:13:09.800 15:49:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 15d14c46-7eb4-4698-8b5d-594d0fc3da79 30 00:13:10.058 15:49:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a9098789-f3f4-481d-88b9-2cda63176994 MY_CLONE 00:13:10.317 15:49:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=318d6009-2abf-4c65-9a3d-08ddc28bd07a 00:13:10.317 15:49:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 318d6009-2abf-4c65-9a3d-08ddc28bd07a 00:13:10.883 15:49:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4182318 00:13:19.022 Initializing NVMe Controllers 00:13:19.022 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:19.022 Controller IO queue size 128, less than required. 00:13:19.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:19.022 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:19.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:19.023 Initialization complete. Launching workers. 
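While spdk_nvme_perf drives random writes at the exported lvol from cores 3 and 4 (core mask 0x18), the test reshapes the volume underneath it: a snapshot is taken, the live lvol is grown from 20 MiB to 30 MiB, the snapshot is cloned, and the clone is inflated so it no longer depends on the snapshot. The same sequence as rpc.py calls, with the names and UUIDs from this run:

scripts/rpc.py bdev_lvol_snapshot 15d14c46-7eb4-4698-8b5d-594d0fc3da79 MY_SNAPSHOT   # -> a9098789-...
scripts/rpc.py bdev_lvol_resize 15d14c46-7eb4-4698-8b5d-594d0fc3da79 30
scripts/rpc.py bdev_lvol_clone a9098789-f3f4-481d-88b9-2cda63176994 MY_CLONE         # -> 318d6009-...
scripts/rpc.py bdev_lvol_inflate 318d6009-2abf-4c65-9a3d-08ddc28bd07a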
00:13:19.023 ======================================================== 00:13:19.023 Latency(us) 00:13:19.023 Device Information : IOPS MiB/s Average min max 00:13:19.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10615.80 41.47 12065.07 1565.87 88238.73 00:13:19.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10132.80 39.58 12633.32 2291.27 69651.30 00:13:19.023 ======================================================== 00:13:19.023 Total : 20748.60 81.05 12342.58 1565.87 88238.73 00:13:19.023 00:13:19.023 15:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:19.023 15:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 15d14c46-7eb4-4698-8b5d-594d0fc3da79 00:13:19.281 15:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a765f19a-3122-4db3-81c8-cc9c37f79da7 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:19.539 rmmod nvme_tcp 00:13:19.539 rmmod nvme_fabrics 00:13:19.539 rmmod nvme_keyring 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4181879 ']' 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4181879 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4181879 ']' 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4181879 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4181879 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4181879' 00:13:19.539 killing process with pid 4181879 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4181879 00:13:19.539 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4181879 00:13:19.797 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:19.797 
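[annotation] For readability, the RPC sequence that nvmf_lvol.sh traced above can be condensed into the shell sketch below. This is a reconstruction from the xtrace output, not part of the test script itself: rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path, and the UUIDs are the ones reported in this particular run.

    # Target side: TCP transport, two 64 MiB malloc bdevs striped into a RAID-0,
    # and a logical volume store carved on top of the raid bdev.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512      # -> Malloc0
    rpc.py bdev_malloc_create 64 512      # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs     # -> a765f19a-3122-4db3-81c8-cc9c37f79da7
    rpc.py bdev_lvol_create -u a765f19a-3122-4db3-81c8-cc9c37f79da7 lvol 20   # 20 MiB lvol

    # Export the lvol over NVMe/TCP and drive it with spdk_nvme_perf in the background.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 15d14c46-7eb4-4698-8b5d-594d0fc3da79
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &

    # While I/O is running: snapshot the lvol, resize it, clone the snapshot,
    # then inflate the clone so it no longer depends on the snapshot.
    rpc.py bdev_lvol_snapshot 15d14c46-7eb4-4698-8b5d-594d0fc3da79 MY_SNAPSHOT
    rpc.py bdev_lvol_resize 15d14c46-7eb4-4698-8b5d-594d0fc3da79 30
    rpc.py bdev_lvol_clone a9098789-f3f4-481d-88b9-2cda63176994 MY_CLONE
    rpc.py bdev_lvol_inflate 318d6009-2abf-4c65-9a3d-08ddc28bd07a
    wait    # let the 10 s perf run above finish before the subsystem and lvstore are deleted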
15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:19.797 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:19.797 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.797 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.797 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.797 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.797 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:22.333 00:13:22.333 real 0m19.763s 00:13:22.333 user 1m7.198s 00:13:22.333 sys 0m5.646s 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 ************************************ 00:13:22.333 END TEST nvmf_lvol 00:13:22.333 ************************************ 00:13:22.333 15:49:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:22.333 15:49:51 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:22.333 15:49:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:22.333 15:49:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.333 15:49:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.333 ************************************ 00:13:22.333 START TEST nvmf_lvs_grow 00:13:22.333 ************************************ 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:22.333 * Looking for test storage... 
00:13:22.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:22.333 15:49:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:24.237 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:24.237 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:24.237 Found net devices under 0000:09:00.0: cvl_0_0 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:24.237 Found net devices under 0000:09:00.1: cvl_0_1 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:24.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:13:24.237 00:13:24.237 --- 10.0.0.2 ping statistics --- 00:13:24.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.237 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:13:24.237 00:13:24.237 --- 10.0.0.1 ping statistics --- 00:13:24.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.237 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:24.237 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4185588 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4185588 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4185588 ']' 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.494 15:49:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.494 [2024-07-12 15:49:54.011191] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:13:24.494 [2024-07-12 15:49:54.011266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.494 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.494 [2024-07-12 15:49:54.073865] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.494 [2024-07-12 15:49:54.180634] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.494 [2024-07-12 15:49:54.180698] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
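[annotation] The block above is the standard phy-mode network bring-up for these TCP tests: one E810 port (cvl_0_0) is moved into a private network namespace for the target, the other (cvl_0_1) stays in the root namespace as the initiator, and reachability is verified with ping before nvmf_tgt is started inside the namespace. Condensed from the trace (nvmf_tgt path shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in

    # Sanity-check both directions, then start the target inside the namespace.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &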
00:13:24.494 [2024-07-12 15:49:54.180725] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.494 [2024-07-12 15:49:54.180736] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.494 [2024-07-12 15:49:54.180745] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.494 [2024-07-12 15:49:54.180772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.751 15:49:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.751 15:49:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:24.751 15:49:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:24.751 15:49:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.751 15:49:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:24.751 15:49:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.751 15:49:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:25.007 [2024-07-12 15:49:54.537799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:25.007 ************************************ 00:13:25.007 START TEST lvs_grow_clean 00:13:25.007 ************************************ 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:25.007 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:25.264 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:25.264 15:49:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:25.521 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7488ca6e-e31c-469a-9935-419e60963ebc 00:13:25.521 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:25.521 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:25.777 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:25.777 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:25.777 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7488ca6e-e31c-469a-9935-419e60963ebc lvol 150 00:13:26.033 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f9fcccab-6549-4c45-8b6b-4d5e2ac04750 00:13:26.033 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:26.033 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:26.290 [2024-07-12 15:49:55.875428] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:26.290 [2024-07-12 15:49:55.875529] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:26.290 true 00:13:26.290 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:26.290 15:49:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:26.546 15:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:26.546 15:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:26.803 15:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9fcccab-6549-4c45-8b6b-4d5e2ac04750 00:13:27.060 15:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:27.317 [2024-07-12 15:49:56.898591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.317 15:49:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4186020 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4186020 /var/tmp/bdevperf.sock 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4186020 ']' 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:27.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.574 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:27.574 [2024-07-12 15:49:57.250497] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:13:27.574 [2024-07-12 15:49:57.250585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4186020 ] 00:13:27.574 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.830 [2024-07-12 15:49:57.306828] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.830 [2024-07-12 15:49:57.412789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.830 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.830 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:27.830 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:28.393 Nvme0n1 00:13:28.393 15:49:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:28.650 [ 00:13:28.650 { 00:13:28.650 "name": "Nvme0n1", 00:13:28.650 "aliases": [ 00:13:28.650 "f9fcccab-6549-4c45-8b6b-4d5e2ac04750" 00:13:28.650 ], 00:13:28.650 "product_name": "NVMe disk", 00:13:28.650 "block_size": 4096, 00:13:28.650 "num_blocks": 38912, 00:13:28.650 "uuid": "f9fcccab-6549-4c45-8b6b-4d5e2ac04750", 00:13:28.650 "assigned_rate_limits": { 00:13:28.650 "rw_ios_per_sec": 0, 00:13:28.650 "rw_mbytes_per_sec": 0, 00:13:28.650 "r_mbytes_per_sec": 0, 00:13:28.650 "w_mbytes_per_sec": 0 00:13:28.650 }, 00:13:28.650 "claimed": false, 00:13:28.650 "zoned": false, 00:13:28.650 "supported_io_types": { 00:13:28.650 "read": true, 00:13:28.650 "write": true, 00:13:28.650 "unmap": true, 00:13:28.650 "flush": true, 00:13:28.650 "reset": true, 00:13:28.650 "nvme_admin": true, 00:13:28.650 "nvme_io": true, 00:13:28.650 "nvme_io_md": false, 00:13:28.650 "write_zeroes": true, 00:13:28.650 "zcopy": false, 00:13:28.650 "get_zone_info": false, 00:13:28.650 "zone_management": false, 00:13:28.650 "zone_append": false, 00:13:28.650 "compare": true, 00:13:28.650 "compare_and_write": true, 00:13:28.650 "abort": true, 00:13:28.650 "seek_hole": false, 00:13:28.650 "seek_data": false, 00:13:28.650 "copy": true, 00:13:28.650 "nvme_iov_md": false 00:13:28.650 }, 00:13:28.650 "memory_domains": [ 00:13:28.650 { 00:13:28.650 "dma_device_id": "system", 00:13:28.650 "dma_device_type": 1 00:13:28.650 } 00:13:28.650 ], 00:13:28.650 "driver_specific": { 00:13:28.650 "nvme": [ 00:13:28.650 { 00:13:28.650 "trid": { 00:13:28.650 "trtype": "TCP", 00:13:28.650 "adrfam": "IPv4", 00:13:28.650 "traddr": "10.0.0.2", 00:13:28.650 "trsvcid": "4420", 00:13:28.650 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:28.650 }, 00:13:28.650 "ctrlr_data": { 00:13:28.650 "cntlid": 1, 00:13:28.650 "vendor_id": "0x8086", 00:13:28.650 "model_number": "SPDK bdev Controller", 00:13:28.650 "serial_number": "SPDK0", 00:13:28.650 "firmware_revision": "24.09", 00:13:28.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:28.650 "oacs": { 00:13:28.650 "security": 0, 00:13:28.650 "format": 0, 00:13:28.650 "firmware": 0, 00:13:28.650 "ns_manage": 0 00:13:28.650 }, 00:13:28.650 "multi_ctrlr": true, 00:13:28.650 "ana_reporting": false 00:13:28.650 }, 
00:13:28.650 "vs": { 00:13:28.650 "nvme_version": "1.3" 00:13:28.650 }, 00:13:28.650 "ns_data": { 00:13:28.650 "id": 1, 00:13:28.650 "can_share": true 00:13:28.650 } 00:13:28.650 } 00:13:28.650 ], 00:13:28.650 "mp_policy": "active_passive" 00:13:28.650 } 00:13:28.650 } 00:13:28.650 ] 00:13:28.650 15:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4186155 00:13:28.650 15:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:28.650 15:49:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:28.650 Running I/O for 10 seconds... 00:13:29.580 Latency(us) 00:13:29.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.580 Nvme0n1 : 1.00 14994.00 58.57 0.00 0.00 0.00 0.00 0.00 00:13:29.580 =================================================================================================================== 00:13:29.580 Total : 14994.00 58.57 0.00 0.00 0.00 0.00 0.00 00:13:29.580 00:13:30.511 15:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:30.768 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.768 Nvme0n1 : 2.00 15236.50 59.52 0.00 0.00 0.00 0.00 0.00 00:13:30.768 =================================================================================================================== 00:13:30.768 Total : 15236.50 59.52 0.00 0.00 0.00 0.00 0.00 00:13:30.768 00:13:30.768 true 00:13:30.768 15:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:30.768 15:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:31.024 15:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:31.024 15:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:31.024 15:50:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4186155 00:13:31.588 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.588 Nvme0n1 : 3.00 15361.33 60.01 0.00 0.00 0.00 0.00 0.00 00:13:31.588 =================================================================================================================== 00:13:31.588 Total : 15361.33 60.01 0.00 0.00 0.00 0.00 0.00 00:13:31.588 00:13:32.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.955 Nvme0n1 : 4.00 15441.25 60.32 0.00 0.00 0.00 0.00 0.00 00:13:32.955 =================================================================================================================== 00:13:32.955 Total : 15441.25 60.32 0.00 0.00 0.00 0.00 0.00 00:13:32.955 00:13:33.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.886 Nvme0n1 : 5.00 15541.80 60.71 0.00 0.00 0.00 0.00 0.00 00:13:33.886 =================================================================================================================== 00:13:33.886 
Total : 15541.80 60.71 0.00 0.00 0.00 0.00 0.00 00:13:33.886 00:13:34.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.845 Nvme0n1 : 6.00 15623.50 61.03 0.00 0.00 0.00 0.00 0.00 00:13:34.845 =================================================================================================================== 00:13:34.845 Total : 15623.50 61.03 0.00 0.00 0.00 0.00 0.00 00:13:34.845 00:13:35.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.776 Nvme0n1 : 7.00 15674.14 61.23 0.00 0.00 0.00 0.00 0.00 00:13:35.776 =================================================================================================================== 00:13:35.776 Total : 15674.14 61.23 0.00 0.00 0.00 0.00 0.00 00:13:35.776 00:13:36.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.706 Nvme0n1 : 8.00 15719.75 61.41 0.00 0.00 0.00 0.00 0.00 00:13:36.706 =================================================================================================================== 00:13:36.706 Total : 15719.75 61.41 0.00 0.00 0.00 0.00 0.00 00:13:36.706 00:13:37.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.638 Nvme0n1 : 9.00 15761.67 61.57 0.00 0.00 0.00 0.00 0.00 00:13:37.638 =================================================================================================================== 00:13:37.638 Total : 15761.67 61.57 0.00 0.00 0.00 0.00 0.00 00:13:37.638 00:13:38.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.568 Nvme0n1 : 10.00 15791.30 61.68 0.00 0.00 0.00 0.00 0.00 00:13:38.568 =================================================================================================================== 00:13:38.568 Total : 15791.30 61.68 0.00 0.00 0.00 0.00 0.00 00:13:38.568 00:13:38.568 00:13:38.568 Latency(us) 00:13:38.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.568 Nvme0n1 : 10.00 15790.42 61.68 0.00 0.00 8101.00 4757.43 16893.72 00:13:38.568 =================================================================================================================== 00:13:38.568 Total : 15790.42 61.68 0.00 0.00 8101.00 4757.43 16893.72 00:13:38.568 0 00:13:38.568 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4186020 00:13:38.568 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4186020 ']' 00:13:38.568 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4186020 00:13:38.568 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:38.568 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:38.568 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4186020 00:13:38.825 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:38.825 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:38.825 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4186020' 00:13:38.825 killing process with pid 4186020 00:13:38.825 15:50:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4186020 00:13:38.825 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.825 00:13:38.825 Latency(us) 00:13:38.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.825 =================================================================================================================== 00:13:38.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.825 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4186020 00:13:39.082 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:39.339 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:39.595 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:39.595 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:39.851 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:39.851 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:39.851 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:40.108 [2024-07-12 15:50:09.602068] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:40.108 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:40.365 request: 00:13:40.365 { 00:13:40.365 "uuid": "7488ca6e-e31c-469a-9935-419e60963ebc", 00:13:40.365 "method": "bdev_lvol_get_lvstores", 00:13:40.365 "req_id": 1 00:13:40.365 } 00:13:40.365 Got JSON-RPC error response 00:13:40.365 response: 00:13:40.365 { 00:13:40.365 "code": -19, 00:13:40.365 "message": "No such device" 00:13:40.365 } 00:13:40.365 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:40.365 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:40.365 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:40.365 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:40.365 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:40.621 aio_bdev 00:13:40.621 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f9fcccab-6549-4c45-8b6b-4d5e2ac04750 00:13:40.621 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=f9fcccab-6549-4c45-8b6b-4d5e2ac04750 00:13:40.621 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:40.621 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:40.621 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:40.621 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:40.621 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:40.878 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f9fcccab-6549-4c45-8b6b-4d5e2ac04750 -t 2000 00:13:41.135 [ 00:13:41.135 { 00:13:41.135 "name": "f9fcccab-6549-4c45-8b6b-4d5e2ac04750", 00:13:41.135 "aliases": [ 00:13:41.135 "lvs/lvol" 00:13:41.135 ], 00:13:41.135 "product_name": "Logical Volume", 00:13:41.135 "block_size": 4096, 00:13:41.135 "num_blocks": 38912, 00:13:41.135 "uuid": "f9fcccab-6549-4c45-8b6b-4d5e2ac04750", 00:13:41.135 "assigned_rate_limits": { 00:13:41.135 "rw_ios_per_sec": 0, 00:13:41.135 "rw_mbytes_per_sec": 0, 00:13:41.135 "r_mbytes_per_sec": 0, 00:13:41.135 "w_mbytes_per_sec": 0 00:13:41.135 }, 00:13:41.135 "claimed": false, 00:13:41.135 "zoned": false, 00:13:41.135 "supported_io_types": { 00:13:41.135 "read": true, 00:13:41.135 "write": true, 00:13:41.135 "unmap": true, 00:13:41.135 "flush": false, 00:13:41.135 "reset": true, 00:13:41.135 "nvme_admin": false, 00:13:41.135 "nvme_io": false, 00:13:41.135 
"nvme_io_md": false, 00:13:41.135 "write_zeroes": true, 00:13:41.135 "zcopy": false, 00:13:41.135 "get_zone_info": false, 00:13:41.135 "zone_management": false, 00:13:41.135 "zone_append": false, 00:13:41.135 "compare": false, 00:13:41.135 "compare_and_write": false, 00:13:41.135 "abort": false, 00:13:41.135 "seek_hole": true, 00:13:41.135 "seek_data": true, 00:13:41.135 "copy": false, 00:13:41.135 "nvme_iov_md": false 00:13:41.135 }, 00:13:41.135 "driver_specific": { 00:13:41.135 "lvol": { 00:13:41.135 "lvol_store_uuid": "7488ca6e-e31c-469a-9935-419e60963ebc", 00:13:41.135 "base_bdev": "aio_bdev", 00:13:41.135 "thin_provision": false, 00:13:41.135 "num_allocated_clusters": 38, 00:13:41.135 "snapshot": false, 00:13:41.135 "clone": false, 00:13:41.135 "esnap_clone": false 00:13:41.135 } 00:13:41.135 } 00:13:41.135 } 00:13:41.135 ] 00:13:41.135 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:41.135 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:41.135 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:41.392 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:41.392 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:41.392 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:41.649 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:41.649 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9fcccab-6549-4c45-8b6b-4d5e2ac04750 00:13:41.906 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7488ca6e-e31c-469a-9935-419e60963ebc 00:13:42.164 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:42.421 00:13:42.421 real 0m17.388s 00:13:42.421 user 0m16.867s 00:13:42.421 sys 0m1.913s 00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:42.421 ************************************ 00:13:42.421 END TEST lvs_grow_clean 00:13:42.421 ************************************ 00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:42.421 15:50:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:42.421 ************************************ 00:13:42.421 START TEST lvs_grow_dirty 00:13:42.421 ************************************ 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:42.421 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:42.678 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:42.678 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:42.950 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:42.950 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:42.950 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:43.207 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:43.207 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:43.207 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ff66188a-469e-45c5-8e02-9ed36cd9bebc lvol 150 00:13:43.464 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6e46ead-3870-45df-9cc6-2152fbebe497 00:13:43.464 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:43.464 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:43.720 
[2024-07-12 15:50:13.304448] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:43.720 [2024-07-12 15:50:13.304538] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:43.720 true 00:13:43.720 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:43.720 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:43.977 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:43.977 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:44.235 15:50:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6e46ead-3870-45df-9cc6-2152fbebe497 00:13:44.493 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:44.751 [2024-07-12 15:50:14.339593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.751 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4188189 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4188189 /var/tmp/bdevperf.sock 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4188189 ']' 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:45.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
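The lvs_grow_dirty provisioning traced above condenses to the sketch below (paths, sizes and flags are the ones used in this run; the shell variables are placeholders, and --md-pages-per-cluster-ratio 300 appears to reserve metadata headroom so the store can be grown later):

# Sketch: build a 200M file-backed lvstore, carve a 150M lvol out of it, then grow
# the backing file to 400M and rescan so the extra blocks (51200 -> 102400) become visible.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
aio_file=$spdk/test/nvmf/target/aio_bdev
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev        # total_data_clusters stays at 49 until the store itself is grown
# Export the lvol over NVMe/TCP and start bdevperf in passive (-z) mode, driven later via RPC.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &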
00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.009 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:45.009 [2024-07-12 15:50:14.632958] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:13:45.009 [2024-07-12 15:50:14.633043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4188189 ] 00:13:45.009 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.009 [2024-07-12 15:50:14.689547] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.267 [2024-07-12 15:50:14.795999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.267 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.267 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:45.267 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:45.832 Nvme0n1 00:13:45.832 15:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:45.832 [ 00:13:45.832 { 00:13:45.832 "name": "Nvme0n1", 00:13:45.832 "aliases": [ 00:13:45.832 "a6e46ead-3870-45df-9cc6-2152fbebe497" 00:13:45.832 ], 00:13:45.832 "product_name": "NVMe disk", 00:13:45.832 "block_size": 4096, 00:13:45.832 "num_blocks": 38912, 00:13:45.832 "uuid": "a6e46ead-3870-45df-9cc6-2152fbebe497", 00:13:45.832 "assigned_rate_limits": { 00:13:45.832 "rw_ios_per_sec": 0, 00:13:45.832 "rw_mbytes_per_sec": 0, 00:13:45.832 "r_mbytes_per_sec": 0, 00:13:45.832 "w_mbytes_per_sec": 0 00:13:45.832 }, 00:13:45.832 "claimed": false, 00:13:45.832 "zoned": false, 00:13:45.832 "supported_io_types": { 00:13:45.832 "read": true, 00:13:45.832 "write": true, 00:13:45.832 "unmap": true, 00:13:45.832 "flush": true, 00:13:45.832 "reset": true, 00:13:45.832 "nvme_admin": true, 00:13:45.832 "nvme_io": true, 00:13:45.832 "nvme_io_md": false, 00:13:45.832 "write_zeroes": true, 00:13:45.832 "zcopy": false, 00:13:45.832 "get_zone_info": false, 00:13:45.832 "zone_management": false, 00:13:45.832 "zone_append": false, 00:13:45.832 "compare": true, 00:13:45.832 "compare_and_write": true, 00:13:45.832 "abort": true, 00:13:45.832 "seek_hole": false, 00:13:45.832 "seek_data": false, 00:13:45.832 "copy": true, 00:13:45.832 "nvme_iov_md": false 00:13:45.832 }, 00:13:45.832 "memory_domains": [ 00:13:45.832 { 00:13:45.832 "dma_device_id": "system", 00:13:45.832 "dma_device_type": 1 00:13:45.832 } 00:13:45.832 ], 00:13:45.832 "driver_specific": { 00:13:45.832 "nvme": [ 00:13:45.832 { 00:13:45.832 "trid": { 00:13:45.832 "trtype": "TCP", 00:13:45.832 "adrfam": "IPv4", 00:13:45.832 "traddr": "10.0.0.2", 00:13:45.832 "trsvcid": "4420", 00:13:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:45.832 }, 00:13:45.832 "ctrlr_data": { 00:13:45.832 "cntlid": 1, 00:13:45.832 "vendor_id": "0x8086", 00:13:45.832 "model_number": "SPDK bdev Controller", 00:13:45.832 "serial_number": "SPDK0", 
00:13:45.832 "firmware_revision": "24.09", 00:13:45.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:45.832 "oacs": { 00:13:45.832 "security": 0, 00:13:45.832 "format": 0, 00:13:45.832 "firmware": 0, 00:13:45.832 "ns_manage": 0 00:13:45.832 }, 00:13:45.832 "multi_ctrlr": true, 00:13:45.832 "ana_reporting": false 00:13:45.832 }, 00:13:45.832 "vs": { 00:13:45.832 "nvme_version": "1.3" 00:13:45.832 }, 00:13:45.832 "ns_data": { 00:13:45.832 "id": 1, 00:13:45.832 "can_share": true 00:13:45.832 } 00:13:45.832 } 00:13:45.832 ], 00:13:45.832 "mp_policy": "active_passive" 00:13:45.832 } 00:13:45.832 } 00:13:45.832 ] 00:13:45.832 15:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4188208 00:13:45.832 15:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:45.832 15:50:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:46.090 Running I/O for 10 seconds... 00:13:47.024 Latency(us) 00:13:47.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.024 Nvme0n1 : 1.00 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:13:47.024 =================================================================================================================== 00:13:47.024 Total : 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:13:47.024 00:13:47.958 15:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:47.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.958 Nvme0n1 : 2.00 15524.00 60.64 0.00 0.00 0.00 0.00 0.00 00:13:47.958 =================================================================================================================== 00:13:47.958 Total : 15524.00 60.64 0.00 0.00 0.00 0.00 0.00 00:13:47.958 00:13:48.216 true 00:13:48.216 15:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:48.216 15:50:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:48.474 15:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:48.474 15:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:48.474 15:50:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4188208 00:13:49.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:49.066 Nvme0n1 : 3.00 15597.33 60.93 0.00 0.00 0.00 0.00 0.00 00:13:49.066 =================================================================================================================== 00:13:49.066 Total : 15597.33 60.93 0.00 0.00 0.00 0.00 0.00 00:13:49.066 00:13:50.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.000 Nvme0n1 : 4.00 15666.00 61.20 0.00 0.00 0.00 0.00 0.00 00:13:50.000 =================================================================================================================== 00:13:50.000 Total : 15666.00 61.20 0.00 
0.00 0.00 0.00 0.00 00:13:50.000 00:13:50.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.933 Nvme0n1 : 5.00 15694.20 61.31 0.00 0.00 0.00 0.00 0.00 00:13:50.933 =================================================================================================================== 00:13:50.933 Total : 15694.20 61.31 0.00 0.00 0.00 0.00 0.00 00:13:50.933 00:13:52.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.309 Nvme0n1 : 6.00 15723.83 61.42 0.00 0.00 0.00 0.00 0.00 00:13:52.309 =================================================================================================================== 00:13:52.309 Total : 15723.83 61.42 0.00 0.00 0.00 0.00 0.00 00:13:52.309 00:13:53.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.243 Nvme0n1 : 7.00 15756.43 61.55 0.00 0.00 0.00 0.00 0.00 00:13:53.243 =================================================================================================================== 00:13:53.243 Total : 15756.43 61.55 0.00 0.00 0.00 0.00 0.00 00:13:53.243 00:13:54.180 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:54.180 Nvme0n1 : 8.00 15792.75 61.69 0.00 0.00 0.00 0.00 0.00 00:13:54.180 =================================================================================================================== 00:13:54.180 Total : 15792.75 61.69 0.00 0.00 0.00 0.00 0.00 00:13:54.180 00:13:55.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.113 Nvme0n1 : 9.00 15808.78 61.75 0.00 0.00 0.00 0.00 0.00 00:13:55.113 =================================================================================================================== 00:13:55.113 Total : 15808.78 61.75 0.00 0.00 0.00 0.00 0.00 00:13:55.113 00:13:56.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.045 Nvme0n1 : 10.00 15821.40 61.80 0.00 0.00 0.00 0.00 0.00 00:13:56.045 =================================================================================================================== 00:13:56.045 Total : 15821.40 61.80 0.00 0.00 0.00 0.00 0.00 00:13:56.045 00:13:56.045 00:13:56.045 Latency(us) 00:13:56.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.045 Nvme0n1 : 10.01 15824.63 61.81 0.00 0.00 8082.48 3737.98 14563.56 00:13:56.045 =================================================================================================================== 00:13:56.045 Total : 15824.63 61.81 0.00 0.00 8082.48 3737.98 14563.56 00:13:56.045 0 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4188189 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4188189 ']' 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4188189 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4188189 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:56.045 15:50:25 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4188189' 00:13:56.045 killing process with pid 4188189 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4188189 00:13:56.045 Received shutdown signal, test time was about 10.000000 seconds 00:13:56.045 00:13:56.045 Latency(us) 00:13:56.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.045 =================================================================================================================== 00:13:56.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.045 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4188189 00:13:56.302 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:56.559 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4185588 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4185588 00:13:57.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4185588 Killed "${NVMF_APP[@]}" "$@" 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4189540 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4189540 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4189540 ']' 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.123 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:57.381 [2024-07-12 15:50:26.872507] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:13:57.381 [2024-07-12 15:50:26.872598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.381 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.381 [2024-07-12 15:50:26.935251] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.381 [2024-07-12 15:50:27.043022] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.381 [2024-07-12 15:50:27.043074] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.381 [2024-07-12 15:50:27.043103] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.381 [2024-07-12 15:50:27.043115] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.381 [2024-07-12 15:50:27.043125] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
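Putting the steps above together, the "dirty" scenario is roughly: attach bdevperf to the exported namespace over its own RPC socket, grow the lvstore while the 10-second randwrite job is running, then kill the target with SIGKILL so the lvstore is never closed cleanly and start a fresh target in the same network namespace. A sketch using values from this run ($lvs is the lvstore UUID from the earlier sketch, $nvmfpid the PID of the original nvmf_tgt, 4185588 here):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 2
$rpc bdev_lvol_grow_lvstore -u "$lvs"          # consume the space added by truncate + rescan
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99 instead of 49
wait                                           # let the randwrite job finish, then stop bdevperf
kill -9 "$nvmfpid"                             # unclean shutdown: the lvstore is left dirty
ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &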
00:13:57.381 [2024-07-12 15:50:27.043167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.638 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.638 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:57.638 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.638 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.638 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:57.638 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.638 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:57.895 [2024-07-12 15:50:27.402697] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:57.896 [2024-07-12 15:50:27.402849] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:57.896 [2024-07-12 15:50:27.402897] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a6e46ead-3870-45df-9cc6-2152fbebe497 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a6e46ead-3870-45df-9cc6-2152fbebe497 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:57.896 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:58.152 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6e46ead-3870-45df-9cc6-2152fbebe497 -t 2000 00:13:58.409 [ 00:13:58.409 { 00:13:58.409 "name": "a6e46ead-3870-45df-9cc6-2152fbebe497", 00:13:58.409 "aliases": [ 00:13:58.409 "lvs/lvol" 00:13:58.409 ], 00:13:58.409 "product_name": "Logical Volume", 00:13:58.409 "block_size": 4096, 00:13:58.409 "num_blocks": 38912, 00:13:58.409 "uuid": "a6e46ead-3870-45df-9cc6-2152fbebe497", 00:13:58.409 "assigned_rate_limits": { 00:13:58.409 "rw_ios_per_sec": 0, 00:13:58.409 "rw_mbytes_per_sec": 0, 00:13:58.409 "r_mbytes_per_sec": 0, 00:13:58.409 "w_mbytes_per_sec": 0 00:13:58.409 }, 00:13:58.409 "claimed": false, 00:13:58.410 "zoned": false, 00:13:58.410 "supported_io_types": { 00:13:58.410 "read": true, 00:13:58.410 "write": true, 00:13:58.410 "unmap": true, 00:13:58.410 "flush": false, 00:13:58.410 "reset": true, 00:13:58.410 "nvme_admin": false, 00:13:58.410 "nvme_io": false, 00:13:58.410 "nvme_io_md": 
false, 00:13:58.410 "write_zeroes": true, 00:13:58.410 "zcopy": false, 00:13:58.410 "get_zone_info": false, 00:13:58.410 "zone_management": false, 00:13:58.410 "zone_append": false, 00:13:58.410 "compare": false, 00:13:58.410 "compare_and_write": false, 00:13:58.410 "abort": false, 00:13:58.410 "seek_hole": true, 00:13:58.410 "seek_data": true, 00:13:58.410 "copy": false, 00:13:58.410 "nvme_iov_md": false 00:13:58.410 }, 00:13:58.410 "driver_specific": { 00:13:58.410 "lvol": { 00:13:58.410 "lvol_store_uuid": "ff66188a-469e-45c5-8e02-9ed36cd9bebc", 00:13:58.410 "base_bdev": "aio_bdev", 00:13:58.410 "thin_provision": false, 00:13:58.410 "num_allocated_clusters": 38, 00:13:58.410 "snapshot": false, 00:13:58.410 "clone": false, 00:13:58.410 "esnap_clone": false 00:13:58.410 } 00:13:58.410 } 00:13:58.410 } 00:13:58.410 ] 00:13:58.410 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:58.410 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:58.410 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:58.667 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:58.667 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:58.667 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:58.667 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:58.667 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:58.924 [2024-07-12 15:50:28.619679] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
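The recovery shown above needs nothing more than re-registering the same backing file on the new target: examine replays the dirty blobstore ("Performing recovery on blobstore") and the lvol reappears under its lvs/lvol alias. A sketch, reusing $rpc, $aio_file, $lvs and $lvol from the earlier sketches:

$rpc bdev_aio_create "$aio_file" aio_bdev 4096
$rpc bdev_wait_for_examine
$rpc bdev_get_bdevs -b "$lvol" -t 2000          # the recovered lvol, alias lvs/lvol
# The grow performed before the SIGKILL must have survived the crash:
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99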
00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:59.182 request: 00:13:59.182 { 00:13:59.182 "uuid": "ff66188a-469e-45c5-8e02-9ed36cd9bebc", 00:13:59.182 "method": "bdev_lvol_get_lvstores", 00:13:59.182 "req_id": 1 00:13:59.182 } 00:13:59.182 Got JSON-RPC error response 00:13:59.182 response: 00:13:59.182 { 00:13:59.182 "code": -19, 00:13:59.182 "message": "No such device" 00:13:59.182 } 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:59.182 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:59.440 aio_bdev 00:13:59.440 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6e46ead-3870-45df-9cc6-2152fbebe497 00:13:59.440 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=a6e46ead-3870-45df-9cc6-2152fbebe497 00:13:59.440 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.440 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:59.440 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.440 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.440 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:59.698 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6e46ead-3870-45df-9cc6-2152fbebe497 -t 2000 00:13:59.955 [ 00:13:59.955 { 00:13:59.955 "name": "a6e46ead-3870-45df-9cc6-2152fbebe497", 00:13:59.955 "aliases": [ 00:13:59.955 "lvs/lvol" 00:13:59.955 ], 00:13:59.955 "product_name": "Logical Volume", 00:13:59.955 "block_size": 4096, 00:13:59.955 "num_blocks": 38912, 00:13:59.955 "uuid": "a6e46ead-3870-45df-9cc6-2152fbebe497", 00:13:59.955 "assigned_rate_limits": { 00:13:59.955 "rw_ios_per_sec": 0, 00:13:59.955 "rw_mbytes_per_sec": 0, 00:13:59.955 "r_mbytes_per_sec": 0, 00:13:59.955 "w_mbytes_per_sec": 0 00:13:59.955 }, 00:13:59.955 "claimed": false, 00:13:59.955 "zoned": false, 00:13:59.955 "supported_io_types": { 
00:13:59.955 "read": true, 00:13:59.955 "write": true, 00:13:59.955 "unmap": true, 00:13:59.955 "flush": false, 00:13:59.955 "reset": true, 00:13:59.955 "nvme_admin": false, 00:13:59.955 "nvme_io": false, 00:13:59.955 "nvme_io_md": false, 00:13:59.955 "write_zeroes": true, 00:13:59.955 "zcopy": false, 00:13:59.955 "get_zone_info": false, 00:13:59.955 "zone_management": false, 00:13:59.955 "zone_append": false, 00:13:59.955 "compare": false, 00:13:59.955 "compare_and_write": false, 00:13:59.956 "abort": false, 00:13:59.956 "seek_hole": true, 00:13:59.956 "seek_data": true, 00:13:59.956 "copy": false, 00:13:59.956 "nvme_iov_md": false 00:13:59.956 }, 00:13:59.956 "driver_specific": { 00:13:59.956 "lvol": { 00:13:59.956 "lvol_store_uuid": "ff66188a-469e-45c5-8e02-9ed36cd9bebc", 00:13:59.956 "base_bdev": "aio_bdev", 00:13:59.956 "thin_provision": false, 00:13:59.956 "num_allocated_clusters": 38, 00:13:59.956 "snapshot": false, 00:13:59.956 "clone": false, 00:13:59.956 "esnap_clone": false 00:13:59.956 } 00:13:59.956 } 00:13:59.956 } 00:13:59.956 ] 00:13:59.956 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:59.956 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:13:59.956 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:00.214 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:00.214 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:14:00.214 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:00.472 15:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:00.472 15:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6e46ead-3870-45df-9cc6-2152fbebe497 00:14:00.728 15:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff66188a-469e-45c5-8e02-9ed36cd9bebc 00:14:00.985 15:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.241 00:14:01.241 real 0m18.920s 00:14:01.241 user 0m47.054s 00:14:01.241 sys 0m5.145s 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:01.241 ************************************ 00:14:01.241 END TEST lvs_grow_dirty 00:14:01.241 ************************************ 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:01.241 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:01.498 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:01.498 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:01.498 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:01.498 15:50:30 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:01.498 nvmf_trace.0 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.498 rmmod nvme_tcp 00:14:01.498 rmmod nvme_fabrics 00:14:01.498 rmmod nvme_keyring 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4189540 ']' 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4189540 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4189540 ']' 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4189540 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4189540 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4189540' 00:14:01.498 killing process with pid 4189540 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4189540 00:14:01.498 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4189540 00:14:01.756 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.756 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.756 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.756 
15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.756 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.756 15:50:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.756 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.756 15:50:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.287 15:50:33 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:04.287 00:14:04.287 real 0m41.817s 00:14:04.287 user 1m9.554s 00:14:04.287 sys 0m9.067s 00:14:04.287 15:50:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:04.287 15:50:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:04.287 ************************************ 00:14:04.287 END TEST nvmf_lvs_grow 00:14:04.287 ************************************ 00:14:04.287 15:50:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:04.287 15:50:33 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:04.287 15:50:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:04.287 15:50:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.287 15:50:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:04.287 ************************************ 00:14:04.287 START TEST nvmf_bdev_io_wait 00:14:04.287 ************************************ 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:04.287 * Looking for test storage... 
00:14:04.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:04.287 15:50:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:06.222 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:06.222 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:06.222 Found net devices under 0000:09:00.0: cvl_0_0 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:06.222 Found net devices under 0000:09:00.1: cvl_0_1 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:06.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:06.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:14:06.222 00:14:06.222 --- 10.0.0.2 ping statistics --- 00:14:06.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.222 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:14:06.222 00:14:06.222 --- 10.0.0.1 ping statistics --- 00:14:06.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.222 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:06.222 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4192064 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4192064 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4192064 ']' 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.223 15:50:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.223 [2024-07-12 15:50:35.883330] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
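For the phy TCP runs, the two e810 ports discovered above are split between a target network namespace (cvl_0_0, 10.0.0.2) and the root namespace acting as initiator (cvl_0_1, 10.0.0.1). The nvmf_tcp_init steps in the trace reduce to the following sketch (interface names and addresses are the ones from this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
modprobe nvme-tcp                                    # kernel NVMe/TCP initiator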
00:14:06.223 [2024-07-12 15:50:35.883395] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.223 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.223 [2024-07-12 15:50:35.942851] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.480 [2024-07-12 15:50:36.047627] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.480 [2024-07-12 15:50:36.047689] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.480 [2024-07-12 15:50:36.047702] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.480 [2024-07-12 15:50:36.047713] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.480 [2024-07-12 15:50:36.047736] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.480 [2024-07-12 15:50:36.047840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.480 [2024-07-12 15:50:36.048281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.481 [2024-07-12 15:50:36.048371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.481 [2024-07-12 15:50:36.048375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.481 [2024-07-12 15:50:36.181785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
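At this point the target is configured but still carries no storage: bdev_set_options shrinks the bdev_io pool to 5 descriptors with a per-thread cache of 1 (apparently so the jobs below exhaust the pool and exercise the bdev I/O-wait path this test is named after), framework_start_init completes the initialization that was deferred by --wait-for-rpc, and nvmf_create_transport enables the TCP transport with the options the test passes. rpc_cmd is the test helper around scripts/rpc.py; issued by hand against the target's default socket, the same bring-up would look roughly like this sketch (paths as in this workspace, flags copied from the trace above):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_set_options -p 5 -c 1        # deliberately tiny bdev_io pool/cache
./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init              # finish deferred subsystem init
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192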
00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.481 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.738 Malloc0 00:14:06.738 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.738 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.738 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.738 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.738 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:06.739 [2024-07-12 15:50:36.250793] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4192208 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4192210 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.739 { 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme$subsystem", 00:14:06.739 "trtype": "$TEST_TRANSPORT", 00:14:06.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "$NVMF_PORT", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.739 "hdgst": ${hdgst:-false}, 00:14:06.739 "ddgst": ${ddgst:-false} 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 } 00:14:06.739 EOF 00:14:06.739 )") 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4192212 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.739 { 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme$subsystem", 00:14:06.739 "trtype": "$TEST_TRANSPORT", 00:14:06.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "$NVMF_PORT", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.739 "hdgst": ${hdgst:-false}, 00:14:06.739 "ddgst": ${ddgst:-false} 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 } 00:14:06.739 EOF 00:14:06.739 )") 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4192215 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.739 { 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme$subsystem", 00:14:06.739 "trtype": "$TEST_TRANSPORT", 00:14:06.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "$NVMF_PORT", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.739 "hdgst": ${hdgst:-false}, 00:14:06.739 "ddgst": ${ddgst:-false} 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 } 00:14:06.739 EOF 00:14:06.739 )") 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:06.739 15:50:36 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:06.739 { 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme$subsystem", 00:14:06.739 "trtype": "$TEST_TRANSPORT", 00:14:06.739 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "$NVMF_PORT", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:06.739 "hdgst": ${hdgst:-false}, 00:14:06.739 "ddgst": ${ddgst:-false} 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 } 00:14:06.739 EOF 00:14:06.739 )") 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4192208 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme1", 00:14:06.739 "trtype": "tcp", 00:14:06.739 "traddr": "10.0.0.2", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "4420", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.739 "hdgst": false, 00:14:06.739 "ddgst": false 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 }' 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
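gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem argument (here just the defaults for Nvme1 against cnode1), and `jq .` only validates and pretty-prints the result before it is handed to each bdevperf instance on /dev/fd/63. Written out to a file, the config would look roughly like the sketch below: the params block is taken from the resolved printf output shown around this point, while the outer subsystems/bdev/config wrapper is the usual SPDK JSON-config layout and is assumed here rather than shown verbatim in this trace (file name is hypothetical):

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF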
00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme1", 00:14:06.739 "trtype": "tcp", 00:14:06.739 "traddr": "10.0.0.2", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "4420", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.739 "hdgst": false, 00:14:06.739 "ddgst": false 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 }' 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme1", 00:14:06.739 "trtype": "tcp", 00:14:06.739 "traddr": "10.0.0.2", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "4420", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.739 "hdgst": false, 00:14:06.739 "ddgst": false 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 }' 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:06.739 15:50:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:06.739 "params": { 00:14:06.739 "name": "Nvme1", 00:14:06.739 "trtype": "tcp", 00:14:06.739 "traddr": "10.0.0.2", 00:14:06.739 "adrfam": "ipv4", 00:14:06.739 "trsvcid": "4420", 00:14:06.739 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:06.739 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:06.739 "hdgst": false, 00:14:06.739 "ddgst": false 00:14:06.739 }, 00:14:06.739 "method": "bdev_nvme_attach_controller" 00:14:06.739 }' 00:14:06.739 [2024-07-12 15:50:36.298290] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:14:06.739 [2024-07-12 15:50:36.298290] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:14:06.739 [2024-07-12 15:50:36.298290] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:14:06.739 [2024-07-12 15:50:36.298290] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:14:06.739 [2024-07-12 15:50:36.298396] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:06.739 [2024-07-12 15:50:36.298397] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:06.739 [2024-07-12 15:50:36.298397] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:06.739 [2024-07-12 15:50:36.298398] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:06.739 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.997 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.997 [2024-07-12 15:50:36.470019] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.997 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.997 [2024-07-12 15:50:36.572357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:06.997 [2024-07-12 15:50:36.574382] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.997 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.997 [2024-07-12 15:50:36.676212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:06.997 [2024-07-12 15:50:36.677905] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.254 [2024-07-12 15:50:36.779226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:07.255 [2024-07-12 15:50:36.782116] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.255 [2024-07-12 15:50:36.883516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:07.255 Running I/O for 1 seconds... 00:14:07.512 Running I/O for 1 seconds... 00:14:07.512 Running I/O for 1 seconds... 00:14:07.512 Running I/O for 1 seconds...
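Four bdevperf instances now run in parallel against the same remote namespace, one workload each (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each as its own SPDK process (-i 1..4, hence the spdk1..spdk4 hugepage file prefixes) with 256 MB of memory, queue depth 128, 4 KiB I/O, for 1 second. With the JSON config sketched earlier saved to a file, any one of them can be reproduced by hand roughly as follows (hypothetical file name; the test itself feeds the config through /dev/fd/63 instead):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256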
00:14:08.445 00:14:08.445 Latency(us) 00:14:08.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.445 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:08.445 Nvme1n1 : 1.01 9476.53 37.02 0.00 0.00 13439.94 8932.31 19418.07 00:14:08.445 =================================================================================================================== 00:14:08.445 Total : 9476.53 37.02 0.00 0.00 13439.94 8932.31 19418.07 00:14:08.445 00:14:08.445 Latency(us) 00:14:08.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.445 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:08.445 Nvme1n1 : 1.00 193033.65 754.04 0.00 0.00 660.46 268.52 885.95 00:14:08.445 =================================================================================================================== 00:14:08.445 Total : 193033.65 754.04 0.00 0.00 660.46 268.52 885.95 00:14:08.445 00:14:08.445 Latency(us) 00:14:08.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.445 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:08.445 Nvme1n1 : 1.01 10062.40 39.31 0.00 0.00 12670.29 6844.87 25049.32 00:14:08.445 =================================================================================================================== 00:14:08.445 Total : 10062.40 39.31 0.00 0.00 12670.29 6844.87 25049.32 00:14:08.445 00:14:08.445 Latency(us) 00:14:08.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.445 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:08.445 Nvme1n1 : 1.01 7789.46 30.43 0.00 0.00 16363.28 5437.06 27379.48 00:14:08.445 =================================================================================================================== 00:14:08.445 Total : 7789.46 30.43 0.00 0.00 16363.28 5437.06 27379.48 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 4192210 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4192212 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4192215 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.010 rmmod nvme_tcp 00:14:09.010 rmmod nvme_fabrics 00:14:09.010 rmmod nvme_keyring 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4192064 ']' 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4192064 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4192064 ']' 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4192064 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4192064 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4192064' 00:14:09.010 killing process with pid 4192064 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4192064 00:14:09.010 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4192064 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.268 15:50:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.171 15:50:40 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.171 00:14:11.171 real 0m7.396s 00:14:11.171 user 0m16.814s 00:14:11.171 sys 0m3.768s 00:14:11.171 15:50:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.171 15:50:40 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:11.171 ************************************ 00:14:11.171 END TEST nvmf_bdev_io_wait 00:14:11.171 ************************************ 00:14:11.171 15:50:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:11.171 15:50:40 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:11.171 15:50:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.171 15:50:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.171 15:50:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.171 ************************************ 00:14:11.171 START TEST nvmf_queue_depth 00:14:11.171 ************************************ 
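The queue-depth test that follows rebuilds the same TCP target environment (network namespace, malloc bdev, cnode1 subsystem, 10.0.0.2:4420 listener) and then, instead of short fixed workloads, starts bdevperf in -z mode on its own RPC socket, attaches the remote controller through that socket, and triggers a 10-second verify run at queue depth 1024 with bdevperf.py, as the trace below shows. Stripped of the test wrappers and socket-wait logic, the driver side is roughly this sketch:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests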
00:14:11.171 15:50:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:11.429 * Looking for test storage... 00:14:11.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.430 15:50:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:13.335 
15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:13.335 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.335 15:50:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:13.335 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:13.335 Found net devices under 0000:09:00.0: cvl_0_0 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:13.335 Found net devices under 0000:09:00.1: cvl_0_1 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.335 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.336 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:13.593 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.593 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:14:13.593 00:14:13.593 --- 10.0.0.2 ping statistics --- 00:14:13.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.593 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.593 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.593 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:14:13.593 00:14:13.593 --- 10.0.0.1 ping statistics --- 00:14:13.593 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.593 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=478 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 478 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 478 ']' 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.593 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.593 [2024-07-12 15:50:43.225829] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:14:13.593 [2024-07-12 15:50:43.225915] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.593 EAL: No free 2048 kB hugepages reported on node 1 00:14:13.593 [2024-07-12 15:50:43.288389] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.852 [2024-07-12 15:50:43.389814] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.852 [2024-07-12 15:50:43.389865] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.852 [2024-07-12 15:50:43.389894] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.852 [2024-07-12 15:50:43.389905] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.852 [2024-07-12 15:50:43.389914] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.852 [2024-07-12 15:50:43.389939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.852 [2024-07-12 15:50:43.530864] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.852 Malloc0 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.852 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.852 
15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:14.110 [2024-07-12 15:50:43.586540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=503 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 503 /var/tmp/bdevperf.sock 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 503 ']' 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.110 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:14.110 [2024-07-12 15:50:43.629789] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:14:14.110 [2024-07-12 15:50:43.629867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid503 ] 00:14:14.110 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.110 [2024-07-12 15:50:43.686058] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.110 [2024-07-12 15:50:43.792326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.367 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.367 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:14.367 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:14.367 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.367 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:14.367 NVMe0n1 00:14:14.367 15:50:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.367 15:50:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:14.367 Running I/O for 10 seconds... 00:14:26.559 00:14:26.559 Latency(us) 00:14:26.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.559 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:26.559 Verification LBA range: start 0x0 length 0x4000 00:14:26.559 NVMe0n1 : 10.10 8909.32 34.80 0.00 0.00 114484.89 22719.15 68739.98 00:14:26.559 =================================================================================================================== 00:14:26.559 Total : 8909.32 34.80 0.00 0.00 114484.89 22719.15 68739.98 00:14:26.559 0 00:14:26.559 15:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 503 00:14:26.559 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 503 ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 503 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 503 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 503' 00:14:26.560 killing process with pid 503 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 503 00:14:26.560 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.560 00:14:26.560 Latency(us) 00:14:26.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.560 =================================================================================================================== 00:14:26.560 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 503 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.560 rmmod nvme_tcp 00:14:26.560 rmmod nvme_fabrics 00:14:26.560 rmmod nvme_keyring 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 478 ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 478 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 478 ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 478 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 478 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 478' 00:14:26.560 killing process with pid 478 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 478 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 478 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.560 15:50:54 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.496 15:50:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.496 00:14:27.496 real 0m16.031s 00:14:27.496 user 0m22.481s 00:14:27.496 sys 0m3.089s 00:14:27.496 15:50:56 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.496 15:50:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:27.496 ************************************ 00:14:27.496 END TEST nvmf_queue_depth 00:14:27.496 ************************************ 00:14:27.496 15:50:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:27.496 15:50:56 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:27.496 15:50:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:27.496 15:50:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.496 15:50:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.496 ************************************ 00:14:27.496 START TEST nvmf_target_multipath 00:14:27.496 ************************************ 00:14:27.496 15:50:56 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:27.496 * Looking for test storage... 00:14:27.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.496 15:50:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.497 15:50:57 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.399 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.400 
15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:29.400 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:29.400 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.400 15:50:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:29.400 Found net devices under 0000:09:00.0: cvl_0_0 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:29.400 Found net devices under 0000:09:00.1: cvl_0_1 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.400 15:50:59 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.400 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:14:29.660 00:14:29.660 --- 10.0.0.2 ping statistics --- 00:14:29.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.660 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:29.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:14:29.660 00:14:29.660 --- 10.0.0.1 ping statistics --- 00:14:29.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.660 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:29.660 only one NIC for nvmf test 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.660 rmmod nvme_tcp 00:14:29.660 rmmod nvme_fabrics 00:14:29.660 rmmod nvme_keyring 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.660 15:50:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.603 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.861 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.861 15:51:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.861 15:51:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.861 15:51:01 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.861 00:14:31.861 real 0m4.360s 00:14:31.861 user 0m0.793s 00:14:31.861 sys 0m1.561s 00:14:31.861 15:51:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.861 15:51:01 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:31.861 ************************************ 00:14:31.861 END TEST nvmf_target_multipath 00:14:31.861 ************************************ 00:14:31.861 15:51:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:31.861 15:51:01 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:31.861 15:51:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.861 15:51:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.861 15:51:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:31.861 ************************************ 00:14:31.861 START TEST nvmf_zcopy 00:14:31.861 ************************************ 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:31.861 * Looking for test storage... 
00:14:31.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:31.861 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:31.862 15:51:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:33.762 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:33.763 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.763 
15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:33.763 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:33.763 Found net devices under 0000:09:00.0: cvl_0_0 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:33.763 Found net devices under 0000:09:00.1: cvl_0_1 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.763 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:34.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:34.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:14:34.020 00:14:34.020 --- 10.0.0.2 ping statistics --- 00:14:34.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.020 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:34.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:34.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:14:34.020 00:14:34.020 --- 10.0.0.1 ping statistics --- 00:14:34.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:34.020 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=5946 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 5946 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 5946 ']' 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.020 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.020 [2024-07-12 15:51:03.631825] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:14:34.020 [2024-07-12 15:51:03.631916] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.020 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.020 [2024-07-12 15:51:03.694482] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.277 [2024-07-12 15:51:03.799100] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.277 [2024-07-12 15:51:03.799160] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:34.277 [2024-07-12 15:51:03.799188] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.277 [2024-07-12 15:51:03.799198] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.277 [2024-07-12 15:51:03.799208] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.277 [2024-07-12 15:51:03.799240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.277 [2024-07-12 15:51:03.927206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.277 [2024-07-12 15:51:03.943444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.277 malloc0 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.277 
15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:34.277 { 00:14:34.277 "params": { 00:14:34.277 "name": "Nvme$subsystem", 00:14:34.277 "trtype": "$TEST_TRANSPORT", 00:14:34.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:34.277 "adrfam": "ipv4", 00:14:34.277 "trsvcid": "$NVMF_PORT", 00:14:34.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:34.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:34.277 "hdgst": ${hdgst:-false}, 00:14:34.277 "ddgst": ${ddgst:-false} 00:14:34.277 }, 00:14:34.277 "method": "bdev_nvme_attach_controller" 00:14:34.277 } 00:14:34.277 EOF 00:14:34.277 )") 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:34.277 15:51:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:34.277 "params": { 00:14:34.277 "name": "Nvme1", 00:14:34.277 "trtype": "tcp", 00:14:34.277 "traddr": "10.0.0.2", 00:14:34.277 "adrfam": "ipv4", 00:14:34.277 "trsvcid": "4420", 00:14:34.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.277 "hdgst": false, 00:14:34.277 "ddgst": false 00:14:34.277 }, 00:14:34.277 "method": "bdev_nvme_attach_controller" 00:14:34.277 }' 00:14:34.534 [2024-07-12 15:51:04.029257] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:14:34.534 [2024-07-12 15:51:04.029372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid6028 ] 00:14:34.534 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.534 [2024-07-12 15:51:04.094570] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.534 [2024-07-12 15:51:04.200530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.792 Running I/O for 10 seconds... 
00:14:44.748 00:14:44.748 Latency(us) 00:14:44.748 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.748 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:44.748 Verification LBA range: start 0x0 length 0x1000 00:14:44.748 Nvme1n1 : 10.01 5837.35 45.60 0.00 0.00 21869.17 3470.98 32816.55 00:14:44.748 =================================================================================================================== 00:14:44.748 Total : 5837.35 45.60 0.00 0.00 21869.17 3470.98 32816.55 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=7741 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:45.005 { 00:14:45.005 "params": { 00:14:45.005 "name": "Nvme$subsystem", 00:14:45.005 "trtype": "$TEST_TRANSPORT", 00:14:45.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:45.005 "adrfam": "ipv4", 00:14:45.005 "trsvcid": "$NVMF_PORT", 00:14:45.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:45.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:45.005 "hdgst": ${hdgst:-false}, 00:14:45.005 "ddgst": ${ddgst:-false} 00:14:45.005 }, 00:14:45.005 "method": "bdev_nvme_attach_controller" 00:14:45.005 } 00:14:45.005 EOF 00:14:45.005 )") 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:45.005 [2024-07-12 15:51:14.731859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.005 [2024-07-12 15:51:14.731905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.005 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:14:45.264 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:45.264 15:51:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:45.264 "params": { 00:14:45.264 "name": "Nvme1", 00:14:45.264 "trtype": "tcp", 00:14:45.264 "traddr": "10.0.0.2", 00:14:45.264 "adrfam": "ipv4", 00:14:45.264 "trsvcid": "4420", 00:14:45.264 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.264 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.264 "hdgst": false, 00:14:45.264 "ddgst": false 00:14:45.264 }, 00:14:45.264 "method": "bdev_nvme_attach_controller" 00:14:45.264 }' 00:14:45.264 [2024-07-12 15:51:14.739796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.739819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.747817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.747838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.755838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.755858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.763860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.763881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.768518] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:14:45.264 [2024-07-12 15:51:14.768579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid7741 ] 00:14:45.264 [2024-07-12 15:51:14.771879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.771899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.779902] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.779922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.787926] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.787946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.795949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.795969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.264 [2024-07-12 15:51:14.803970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.803990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.811992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.812011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.820012] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.820032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.828036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.828056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.828741] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.264 [2024-07-12 15:51:14.836088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.836118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.844111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.844146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.852100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.852121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.860124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.860145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.868142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.868161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.876162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.876182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.884185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.884204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.892239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.892284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.900246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.900274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.908268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.908289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.916269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.916290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.924295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.924340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.932343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.932365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.940385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.940407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.942840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.264 [2024-07-12 15:51:14.948386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.948408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.956414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.956440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.964473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.964509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.972474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.972510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.980498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.980536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.264 [2024-07-12 15:51:14.988539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.264 [2024-07-12 15:51:14.988578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:14.996542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:14.996578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.004564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:15.004624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.012550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:15.012583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.020634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:15.020685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.028646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:15.028697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.036630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:15.036651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.044656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:45.522 [2024-07-12 15:51:15.044676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.052687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:15.052707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.060705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.522 [2024-07-12 15:51:15.060729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.522 [2024-07-12 15:51:15.068705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.068727] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.076727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.076750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.084750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.084772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.092792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.092829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.100796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.100817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.108815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.108835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.116861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.116885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.124863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.124884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 Running I/O for 5 seconds... 
00:14:45.523 [2024-07-12 15:51:15.132886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.132906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.147014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.147043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.157735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.157763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.168665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.168698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.181476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.181505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.191768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.191796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.202509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.202538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.215200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.215228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.224875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.224902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.235608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.235636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.523 [2024-07-12 15:51:15.246215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.523 [2024-07-12 15:51:15.246243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.256941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 [2024-07-12 15:51:15.256969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.267712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 [2024-07-12 15:51:15.267740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.280142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 [2024-07-12 15:51:15.280169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.290018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 
[2024-07-12 15:51:15.290045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.300531] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 [2024-07-12 15:51:15.300558] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.310944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 [2024-07-12 15:51:15.310971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.321628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 [2024-07-12 15:51:15.321656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.332142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.780 [2024-07-12 15:51:15.332169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.780 [2024-07-12 15:51:15.345142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.345169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.355224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.355251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.365601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.365628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.378003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.378030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.387771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.387798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.398081] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.398108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.408354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.408381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.418198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.418225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.428174] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.428202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.438690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.438717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.451089] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.451116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.460757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.460784] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.470682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.470709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.480962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.480989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.491311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.491345] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.781 [2024-07-12 15:51:15.501561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.781 [2024-07-12 15:51:15.501589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.511408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.511435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.521650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.521678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.531626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.531652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.541835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.541861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.552210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.552237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.562608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.562634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.572905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.572932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.583018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.583046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.593538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.593565] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.603705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.603732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.614009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.614036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.624493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.624520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.637171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.637199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.647624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.647651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.657838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.657865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.668294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.668328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.678994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.679021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.689239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.689266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.699633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.699660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.709948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.709976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.720594] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.720622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.731442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.731469] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.742596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.742623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.753088] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.753116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.039 [2024-07-12 15:51:15.765915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.039 [2024-07-12 15:51:15.765943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.775769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.775796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.786076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.786104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.796524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.796552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.806998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.807025] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.817360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.817387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.827893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.827920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.838383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.838410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.848456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.848482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.859138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.859165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.871295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.871332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.880599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.880626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.893509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.893537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.903702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.903729] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.914068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.914095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.924595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.924622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.935301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.935338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.945684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.945712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.955895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.955922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.966633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.966667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.979211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.979238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.989557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.989585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:15.999920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:15.999948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:16.010891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:16.010919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.297 [2024-07-12 15:51:16.021298] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.297 [2024-07-12 15:51:16.021334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.031661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.031688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.042385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.042412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.052856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.052884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.063261] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.063289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.075970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.075997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.085749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.085776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.096132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.096160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.106388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.106415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.116797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.116824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.127344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.127371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.137506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.137534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.148356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.148383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.158891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.158917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.171130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.171164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.180955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.180982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.191715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.191742] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.202072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.202099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.212355] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.212383] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.222728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.222756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.233527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.233554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.243935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.243962] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.254030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.254057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.264356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.264394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.555 [2024-07-12 15:51:16.274878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.555 [2024-07-12 15:51:16.274905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.285203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.285230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.295346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.295374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.305535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.305561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.315872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.315899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.328392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.328420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.337648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.337674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.348185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.348212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.358821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.358848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.371461] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.371495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.382921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.382949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.391975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.392002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.403421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.403448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.415660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.415687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.425283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.425310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.435601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.435628] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.446094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.446121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.456466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.456493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.466448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.466474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.476561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.476588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.486606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.486634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.496997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.497024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.507214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.507241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.517210] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.517237] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.527449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.527476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.813 [2024-07-12 15:51:16.537739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.813 [2024-07-12 15:51:16.537767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.547974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.548001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.558380] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.558408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.570933] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.570967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.581107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.581134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.591247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.591274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.601117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.601144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.611134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.611161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.621593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.621620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.632132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.632159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.642500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.642527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.653131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.653159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.665509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.665536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.674535] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.674562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.684978] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.685021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.695252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.695279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.705420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.705447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.715816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.715842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.726388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.726415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.737289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.737327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.749132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.749159] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.758611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.758638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.769431] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.769472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.780051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.780078] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.070 [2024-07-12 15:51:16.790365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.070 [2024-07-12 15:51:16.790392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.800469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.800496] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.811032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.811059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.823817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.823844] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.833272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.833300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.844189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.844216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.854485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.854512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.865140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.865167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.877744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.877770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.887841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.887869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.898416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.898443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.912109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.912137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.921754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.921782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.932068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.932096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.942438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.942464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.952778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.952805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.962980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.963007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.973486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.973512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.986364] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.986390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:16.996110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:16.996136] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:17.006373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:17.006400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:17.016905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:17.016932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:17.028887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:17.028914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:17.038089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:17.038115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.328 [2024-07-12 15:51:17.048948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.328 [2024-07-12 15:51:17.048975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.060964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.060992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.070148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.070176] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.080986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.081013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.093235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.093262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.103291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.103327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.114263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.114291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.126768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.126796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.136757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.136785] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.146830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.146858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.157119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.157146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.167446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.167474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.177760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.177788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.188230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.188259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.198529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.198557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.209098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.209125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.220092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.220120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.230620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.230647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.240754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.240782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.251402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.251430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.263691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.263719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.273496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.273523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.284262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.284289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.295219] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.295246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.585 [2024-07-12 15:51:17.305855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.585 [2024-07-12 15:51:17.305882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.317904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.317931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.327736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.327763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.338434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.338462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.350919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.350946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.363493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.363520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.373433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.373460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.384475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.384502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.394676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.394703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.404742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.404769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.414912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.414940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.425374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.425401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.435925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.435954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.446077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.446104] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.456700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.456728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.467074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.467101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.479366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.479394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.488288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.488323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.498738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.498765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.509383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.509410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.521302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.521336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.530664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.530691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.541588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.541616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.552145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.552172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.843 [2024-07-12 15:51:17.562107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.843 [2024-07-12 15:51:17.562134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.572502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.572536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.583417] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.583444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.593749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.593775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.605677] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.605704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.615016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.615043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.626134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.626161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.638140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.638167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.647822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.647849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.658363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.658390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.668399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.668426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.678577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.678604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.689164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.689191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.703373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.703400] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.713642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.713669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.723852] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.723879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.735990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.736018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.745227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.745254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.756009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.756036] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.767145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.767172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.779637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.779672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.789920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.789947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.801325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.801352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.811660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.811687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.101 [2024-07-12 15:51:17.822255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.101 [2024-07-12 15:51:17.822281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.368 [2024-07-12 15:51:17.834612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.368 [2024-07-12 15:51:17.834639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.843864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.843892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.853849] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.853876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.864267] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.864294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.874550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.874576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.884824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.884851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.895981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.896008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.906514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.906541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.916899] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.916926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.927178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.927205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.937106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.937133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.947271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.947298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.957528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.957555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.968080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.968107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.978585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.978624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.988733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.988759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:17.998923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:17.998949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.008830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.008857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.018814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.018856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.028771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.028797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.039135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.039163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.049184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.049212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.059228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.059255] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.069645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.069687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.081720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.081746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.369 [2024-07-12 15:51:18.091071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.369 [2024-07-12 15:51:18.091098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.101447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.101474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.113974] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.114001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.125393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.125420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.134336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.134364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.145193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.145220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.155730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.155757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.168134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.168161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.177893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.177927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.188278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.188305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.198793] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.198819] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.211334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.211361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.220895] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.220921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.231021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.231048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.241072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.241100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.251497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.251525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.262076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.262104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.272887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.272915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.283295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.283332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.293454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.293482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.304133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.304160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.316972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.317000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.326512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.326539] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.337430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.337468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.625 [2024-07-12 15:51:18.347807] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.625 [2024-07-12 15:51:18.347835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.358020] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.358047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.368647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.368674] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.381553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.381587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.391465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.391492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.401777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.401804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.412284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.412311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.424280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.424307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.433949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.433976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.444266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.444293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.454697] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.454724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.467111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.467138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.476670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.476697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.486950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.486977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.497051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.497079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.507303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.507342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.517778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.517806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.530414] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.530441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.540119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.540146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.550820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.550847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.563215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.563242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.573134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.573162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.583650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.583677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.593979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.594006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:48.883 [2024-07-12 15:51:18.603988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:48.883 [2024-07-12 15:51:18.604015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.613936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.613963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.624118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.624145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.634145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.634172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.644538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.644565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.654629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.654656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.665116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.665143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.675462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.675489] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.685425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.685452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.695584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.695611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.706301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.706338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.716620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.716655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.727334] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.727361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.737455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.737482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.747889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.747915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.760584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.760611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.770731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.770757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.781172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.781200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.793502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.793530] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.803098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.803125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.813482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.813509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.826685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.826712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.836659] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.836687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.847451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.847478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.857936] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.857963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.140 [2024-07-12 15:51:18.868263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.140 [2024-07-12 15:51:18.868291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.878992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.879019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.889900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.889927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.900113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.900140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.910498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.910525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.922912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.922938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.932553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.932581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.943238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.943265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.953652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.953678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.397 [2024-07-12 15:51:18.966478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.397 [2024-07-12 15:51:18.966505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:18.976304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:18.976338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:18.986665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:18.986693] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:18.996904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:18.996931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.007440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.007467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.017962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.017989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.028048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.028075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.038552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.038579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.048918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.048945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.059389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.059415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.069832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.069859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.080167] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.080194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.090493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.090521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.101121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.101147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.111741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.111767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.398 [2024-07-12 15:51:19.123536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.398 [2024-07-12 15:51:19.123563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.132607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.132633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.143356] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.143384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.153607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.153634] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.166175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.166202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.175809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.175844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.186076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.186103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.196535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.196562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.209069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.209096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.217877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.217904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.228817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.228844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.238965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.238993] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.249026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.249053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.259150] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.259177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.269303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.269338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.279620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.279647] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.290005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.290032] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.300331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.300358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.310461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.310488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.320931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.320959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.333438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.333465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.342485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.342512] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.353612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.353640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.365976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.366019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.656 [2024-07-12 15:51:19.376804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.656 [2024-07-12 15:51:19.376838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.385432] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.385460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.400856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.400887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.410343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.410371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.420828] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.420869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.433357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.433385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.443178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.443206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.453860] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.453888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.464596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.464624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.477204] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.477232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.487681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.487708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.498454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.498482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.510915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.510943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.520800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.520828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.530958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.530985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.541444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.541471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.551880] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.551907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.562340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.562368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.574584] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.574623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.583876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.583910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.594784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.594811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.605207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.605235] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.615825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.615852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.627972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.627999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.914 [2024-07-12 15:51:19.637047] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:49.914 [2024-07-12 15:51:19.637076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.648284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.648311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.658527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.658555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.668930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.668956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.679371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.679399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.691528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.691556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.702834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.702862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.711917] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.711946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.722616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.722643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.732912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.732939] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.743018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.743045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.753226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.753253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.763718] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.763744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.774383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.774410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.785149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.785183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.795555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.795582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.806014] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.806041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.816696] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.816723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.827372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.827398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.838039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.838066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.848751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.848778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.859365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.859391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.871480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.871507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.881068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.881096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.172 [2024-07-12 15:51:19.892489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.172 [2024-07-12 15:51:19.892517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.904905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.904933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.913930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.913958] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.924776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.924803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.934797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.934824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.944970] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.944997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.955356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.955383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.966018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.966045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.976235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.976262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.986620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.986655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:19.997149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:19.997175] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.008231] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.008266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.020583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.020613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.029638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.029666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.040637] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.040665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.050869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.050896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.061541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.061568] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.073910] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.073937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.083894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.083921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.094173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.094200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.104344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.104372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.115002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.115029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.125522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.125549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.136076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.136103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.146022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.146049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 [2024-07-12 15:51:20.151078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.430 [2024-07-12 15:51:20.151100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.430 00:14:50.430 Latency(us) 00:14:50.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.430 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:50.430 Nvme1n1 : 5.01 12201.62 95.33 0.00 0.00 10476.62 4514.70 24466.77 00:14:50.430 =================================================================================================================== 00:14:50.430 Total : 12201.62 95.33 0.00 0.00 10476.62 4514.70 24466.77 00:14:50.688 [2024-07-12 15:51:20.159147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.159170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.167129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.167151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.175207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.175248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.183239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.183285] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.191257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.191300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.199286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.199358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.207305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.207369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.215345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.215401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.223353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.223397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.231378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.231425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.239399] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.239444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.247429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.247479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.255442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.255489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.263454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.263500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.688 [2024-07-12 15:51:20.271475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.688 [2024-07-12 15:51:20.271522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.279501] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.279546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.287517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.287561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.295494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.295518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.303509] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.303532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.311530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.311551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.319556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.319578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.327609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.327644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.335652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.335695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.343677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.343722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.351658] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.351695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.359688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.359709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.367699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.367719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.375705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.375724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.383783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.383822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.391810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.391853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.399827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.399886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.407797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.407818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.689 [2024-07-12 15:51:20.415816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.689 [2024-07-12 15:51:20.415850] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.946 [2024-07-12 15:51:20.423837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.946 [2024-07-12 15:51:20.423857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.946 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (7741) - No such process 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 7741 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.947 delay0 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.947 15:51:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:50.947 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.947 [2024-07-12 15:51:20.495043] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:57.525 Initializing NVMe Controllers 00:14:57.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:57.525 Initialization complete. Launching workers. 
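The abort phase set up above boils down to four operations against the already-running target: detach the namespace the loop was fighting over, create a deliberately slow delay bdev on top of malloc0, re-attach it as NSID 1, and point the abort example at it. A minimal sketch of doing the same by hand is below; it assumes the standard scripts/rpc.py wrapper (which rpc_cmd in these test scripts invokes) and the same address and NQN used in this run.

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # add 1,000,000 us of average and p99 latency to both reads and writes
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # random mixed I/O for 5 s against the slowed namespace gives the tool commands to abort
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'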
00:14:57.525 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 7228 00:14:57.525 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7458, failed to submit 62 00:14:57.525 success 7328, unsuccess 130, failed 0 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.525 rmmod nvme_tcp 00:14:57.525 rmmod nvme_fabrics 00:14:57.525 rmmod nvme_keyring 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.525 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 5946 ']' 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 5946 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 5946 ']' 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 5946 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 5946 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 5946' 00:14:57.526 killing process with pid 5946 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 5946 00:14:57.526 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 5946 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.784 15:51:27 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.324 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:00.324 00:15:00.324 real 0m28.103s 00:15:00.324 user 0m41.121s 00:15:00.324 sys 0m8.799s 00:15:00.324 15:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:15:00.324 15:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:00.324 ************************************ 00:15:00.324 END TEST nvmf_zcopy 00:15:00.324 ************************************ 00:15:00.324 15:51:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:00.324 15:51:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:00.324 15:51:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.324 15:51:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.324 15:51:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.324 ************************************ 00:15:00.324 START TEST nvmf_nmic 00:15:00.324 ************************************ 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:00.324 * Looking for test storage... 00:15:00.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.324 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.325 15:51:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.325 15:51:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.231 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.231 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.231 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.231 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.231 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.231 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.231 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:02.232 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:02.232 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:02.232 Found net devices under 0000:09:00.0: cvl_0_0 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:02.232 Found net devices under 0000:09:00.1: cvl_0_1 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:02.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:15:02.232 00:15:02.232 --- 10.0.0.2 ping statistics --- 00:15:02.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.232 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:02.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:15:02.232 00:15:02.232 --- 10.0.0.1 ping statistics --- 00:15:02.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.232 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=11142 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 11142 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 11142 ']' 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.232 15:51:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.232 [2024-07-12 15:51:31.915536] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:15:02.232 [2024-07-12 15:51:31.915635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.232 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.490 [2024-07-12 15:51:31.980880] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.490 [2024-07-12 15:51:32.089224] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.490 [2024-07-12 15:51:32.089273] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
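The connectivity checks above complete the split-namespace wiring this rig uses: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator keeps cvl_0_1 as 10.0.0.1 in the default namespace. Condensed from the commands traced above, and assuming the same interface names, the setup is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow the NVMe/TCP port
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1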
00:15:02.490 [2024-07-12 15:51:32.089310] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.490 [2024-07-12 15:51:32.089330] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.490 [2024-07-12 15:51:32.089340] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.490 [2024-07-12 15:51:32.089418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.490 [2024-07-12 15:51:32.089477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.490 [2024-07-12 15:51:32.089546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.490 [2024-07-12 15:51:32.089549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.490 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.490 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:02.490 15:51:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.490 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:02.490 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 [2024-07-12 15:51:32.241016] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 Malloc0 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 [2024-07-12 15:51:32.293149] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:02.748 test case1: single bdev can't be used in multiple subsystems 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.748 [2024-07-12 15:51:32.317048] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:02.748 [2024-07-12 15:51:32.317078] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:02.748 [2024-07-12 15:51:32.317093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:02.748 request: 00:15:02.748 { 00:15:02.748 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:02.748 "namespace": { 00:15:02.748 "bdev_name": "Malloc0", 00:15:02.748 "no_auto_visible": false 00:15:02.748 }, 00:15:02.748 "method": "nvmf_subsystem_add_ns", 00:15:02.748 "req_id": 1 00:15:02.748 } 00:15:02.748 Got JSON-RPC error response 00:15:02.748 response: 00:15:02.748 { 00:15:02.748 "code": -32602, 00:15:02.748 "message": "Invalid parameters" 00:15:02.748 } 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:02.748 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:02.749 Adding namespace failed - expected result. 
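Test case1 above is the expected-failure path: once Malloc0 is attached to cnode1 it is claimed (type exclusive_write), so a second subsystem cannot attach it, which is exactly the -32602 JSON-RPC error captured in the log. A hand-run sketch of the same sequence, assuming the standard scripts/rpc.py wrapper behind rpc_cmd:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # succeeds; Malloc0 is now claimed
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails with 'Invalid parameters' (-32602), as above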
00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:02.749 test case2: host connect to nvmf target in multiple paths 00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:02.749 [2024-07-12 15:51:32.325146] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.749 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.313 15:51:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:03.877 15:51:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.877 15:51:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.877 15:51:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.877 15:51:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:03.877 15:51:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:06.401 15:51:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:06.401 15:51:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:06.401 15:51:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.401 15:51:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:06.401 15:51:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.401 15:51:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:06.401 15:51:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:06.401 [global] 00:15:06.401 thread=1 00:15:06.401 invalidate=1 00:15:06.401 rw=write 00:15:06.401 time_based=1 00:15:06.401 runtime=1 00:15:06.401 ioengine=libaio 00:15:06.401 direct=1 00:15:06.401 bs=4096 00:15:06.401 iodepth=1 00:15:06.401 norandommap=0 00:15:06.401 numjobs=1 00:15:06.401 00:15:06.401 verify_dump=1 00:15:06.401 verify_backlog=512 00:15:06.401 verify_state_save=0 00:15:06.401 do_verify=1 00:15:06.401 verify=crc32c-intel 00:15:06.401 [job0] 00:15:06.401 filename=/dev/nvme0n1 00:15:06.401 Could not set queue depth (nvme0n1) 00:15:06.401 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.401 fio-3.35 00:15:06.401 Starting 1 thread 00:15:07.334 00:15:07.334 job0: (groupid=0, jobs=1): err= 0: pid=11775: Fri Jul 12 15:51:36 2024 00:15:07.334 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:07.334 slat (nsec): min=5510, max=56789, avg=11914.64, stdev=5873.76 
00:15:07.334 clat (usec): min=314, max=637, avg=375.06, stdev=49.26 00:15:07.334 lat (usec): min=320, max=645, avg=386.97, stdev=50.01 00:15:07.334 clat percentiles (usec): 00:15:07.334 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:15:07.334 | 30.00th=[ 351], 40.00th=[ 359], 50.00th=[ 367], 60.00th=[ 371], 00:15:07.334 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 482], 00:15:07.334 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 627], 99.95th=[ 635], 00:15:07.334 | 99.99th=[ 635] 00:15:07.334 write: IOPS=1600, BW=6402KiB/s (6555kB/s)(6408KiB/1001msec); 0 zone resets 00:15:07.334 slat (usec): min=7, max=29958, avg=30.82, stdev=748.23 00:15:07.334 clat (usec): min=168, max=431, avg=215.22, stdev=25.36 00:15:07.334 lat (usec): min=177, max=30211, avg=246.04, stdev=749.68 00:15:07.334 clat percentiles (usec): 00:15:07.334 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 194], 00:15:07.334 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:15:07.334 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 249], 00:15:07.334 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 433], 99.95th=[ 433], 00:15:07.334 | 99.99th=[ 433] 00:15:07.334 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:15:07.334 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:07.334 lat (usec) : 250=48.57%, 500=49.59%, 750=1.85% 00:15:07.334 cpu : usr=2.10%, sys=5.90%, ctx=3141, majf=0, minf=2 00:15:07.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.334 issued rwts: total=1536,1602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.334 00:15:07.334 Run status group 0 (all jobs): 00:15:07.334 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:15:07.334 WRITE: bw=6402KiB/s (6555kB/s), 6402KiB/s-6402KiB/s (6555kB/s-6555kB/s), io=6408KiB (6562kB), run=1001-1001msec 00:15:07.334 00:15:07.334 Disk stats (read/write): 00:15:07.334 nvme0n1: ios=1337/1536, merge=0/0, ticks=1462/318, in_queue=1780, util=98.70% 00:15:07.334 15:51:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:07.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:07.335 15:51:37 nvmf_tcp.nvmf_nmic -- 
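The verify job that just completed is fully described by the job file fio-wrapper echoed before the run: 4 KiB sequential writes at queue depth 1 through libaio, time-based for 1 s, with crc32c-intel data verification. A roughly equivalent stand-alone invocation, assuming the connected namespace again shows up as /dev/nvme0n1, would be:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based=1 --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_backlog=512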
nvmf/common.sh@117 -- # sync 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:07.592 rmmod nvme_tcp 00:15:07.592 rmmod nvme_fabrics 00:15:07.592 rmmod nvme_keyring 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 11142 ']' 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 11142 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 11142 ']' 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 11142 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 11142 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 11142' 00:15:07.592 killing process with pid 11142 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 11142 00:15:07.592 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 11142 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.851 15:51:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.387 15:51:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:10.387 00:15:10.387 real 0m9.963s 00:15:10.387 user 0m22.275s 00:15:10.387 sys 0m2.386s 00:15:10.387 15:51:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:10.387 15:51:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:10.387 ************************************ 00:15:10.387 END TEST nvmf_nmic 00:15:10.387 ************************************ 00:15:10.387 15:51:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:10.387 15:51:39 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:10.387 15:51:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:10.387 15:51:39 nvmf_tcp -- 
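The teardown traced here is the same nvmftestfini/nvmfcleanup pattern seen after the zcopy test: unload the host-side NVMe fabrics modules, kill the nvmf_tgt process, and flush the initiator address (the suppressed _remove_spdk_ns step presumably also deletes the cvl_0_0_ns_spdk namespace). Condensed, for this run's pid and interfaces:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 11142                    # nvmf_tgt pid in this run
  ip -4 addr flush cvl_0_1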
common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.387 15:51:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.387 ************************************ 00:15:10.387 START TEST nvmf_fio_target 00:15:10.387 ************************************ 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:10.387 * Looking for test storage... 00:15:10.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.387 15:51:39 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:10.388 15:51:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.288 15:51:41 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:12.288 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:12.288 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.288 15:51:41 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:12.288 Found net devices under 0000:09:00.0: cvl_0_0 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:12.288 Found net devices under 0000:09:00.1: cvl_0_1 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.288 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:15:12.288 00:15:12.288 --- 10.0.0.2 ping statistics --- 00:15:12.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.289 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:15:12.289 00:15:12.289 --- 10.0.0.1 ping statistics --- 00:15:12.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.289 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=13848 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 13848 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 13848 ']' 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:12.289 15:51:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.289 [2024-07-12 15:51:41.886432] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:15:12.289 [2024-07-12 15:51:41.886532] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.289 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.289 [2024-07-12 15:51:41.949061] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.546 [2024-07-12 15:51:42.052772] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.546 [2024-07-12 15:51:42.052820] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.546 [2024-07-12 15:51:42.052844] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.546 [2024-07-12 15:51:42.052855] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.546 [2024-07-12 15:51:42.052864] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.546 [2024-07-12 15:51:42.052964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.546 [2024-07-12 15:51:42.053080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.546 [2024-07-12 15:51:42.053176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.546 [2024-07-12 15:51:42.053183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.546 15:51:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:12.546 15:51:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:12.546 15:51:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.546 15:51:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:12.546 15:51:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.546 15:51:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.546 15:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:12.803 [2024-07-12 15:51:42.427725] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.803 15:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.061 15:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:13.061 15:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.319 15:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:13.319 15:51:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.575 15:51:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:15:13.575 15:51:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.832 15:51:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:13.832 15:51:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:14.089 15:51:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:14.347 15:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:14.347 15:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:14.638 15:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:14.638 15:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:14.896 15:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:14.896 15:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:15.153 15:51:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:15.409 15:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:15.409 15:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.666 15:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:15.666 15:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.923 15:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:16.180 [2024-07-12 15:51:45.748187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:16.180 15:51:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:16.437 15:51:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:16.694 15:51:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.260 15:51:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:17.260 15:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:17.260 15:51:46 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.260 15:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:17.260 15:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:17.260 15:51:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:19.156 15:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:19.156 15:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:19.156 15:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.156 15:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:19.156 15:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.156 15:51:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:19.156 15:51:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:19.413 [global] 00:15:19.413 thread=1 00:15:19.413 invalidate=1 00:15:19.413 rw=write 00:15:19.413 time_based=1 00:15:19.413 runtime=1 00:15:19.413 ioengine=libaio 00:15:19.413 direct=1 00:15:19.413 bs=4096 00:15:19.413 iodepth=1 00:15:19.413 norandommap=0 00:15:19.413 numjobs=1 00:15:19.413 00:15:19.413 verify_dump=1 00:15:19.413 verify_backlog=512 00:15:19.413 verify_state_save=0 00:15:19.413 do_verify=1 00:15:19.413 verify=crc32c-intel 00:15:19.413 [job0] 00:15:19.413 filename=/dev/nvme0n1 00:15:19.413 [job1] 00:15:19.413 filename=/dev/nvme0n2 00:15:19.413 [job2] 00:15:19.413 filename=/dev/nvme0n3 00:15:19.413 [job3] 00:15:19.413 filename=/dev/nvme0n4 00:15:19.413 Could not set queue depth (nvme0n1) 00:15:19.413 Could not set queue depth (nvme0n2) 00:15:19.413 Could not set queue depth (nvme0n3) 00:15:19.413 Could not set queue depth (nvme0n4) 00:15:19.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.413 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.413 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.413 fio-3.35 00:15:19.413 Starting 4 threads 00:15:20.789 00:15:20.789 job0: (groupid=0, jobs=1): err= 0: pid=14918: Fri Jul 12 15:51:50 2024 00:15:20.789 read: IOPS=268, BW=1074KiB/s (1100kB/s)(1076KiB/1002msec) 00:15:20.789 slat (nsec): min=6353, max=37366, avg=10821.78, stdev=5178.56 00:15:20.789 clat (usec): min=325, max=42008, avg=3001.62, stdev=9782.20 00:15:20.789 lat (usec): min=336, max=42025, avg=3012.44, stdev=9784.09 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 330], 5.00th=[ 351], 10.00th=[ 367], 20.00th=[ 392], 00:15:20.789 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 457], 60.00th=[ 523], 00:15:20.789 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 652], 95.00th=[40633], 00:15:20.789 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:20.789 | 99.99th=[42206] 00:15:20.789 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:15:20.789 slat (usec): min=9, max=1554, avg=23.66, stdev=71.97 00:15:20.789 clat (usec): 
min=218, max=850, avg=342.97, stdev=63.42 00:15:20.789 lat (usec): min=228, max=1920, avg=366.63, stdev=96.82 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 241], 5.00th=[ 262], 10.00th=[ 281], 20.00th=[ 293], 00:15:20.789 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 347], 00:15:20.789 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 441], 00:15:20.789 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[ 848], 99.95th=[ 848], 00:15:20.789 | 99.99th=[ 848] 00:15:20.789 bw ( KiB/s): min= 4096, max= 4096, per=24.14%, avg=4096.00, stdev= 0.00, samples=1 00:15:20.789 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:20.789 lat (usec) : 250=1.41%, 500=82.07%, 750=13.83%, 1000=0.51% 00:15:20.789 lat (msec) : 50=2.18% 00:15:20.789 cpu : usr=0.80%, sys=1.70%, ctx=785, majf=0, minf=1 00:15:20.789 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.789 issued rwts: total=269,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.789 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.789 job1: (groupid=0, jobs=1): err= 0: pid=14919: Fri Jul 12 15:51:50 2024 00:15:20.789 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:20.789 slat (nsec): min=6160, max=43360, avg=12866.02, stdev=6840.73 00:15:20.789 clat (usec): min=259, max=41133, avg=525.16, stdev=2200.65 00:15:20.789 lat (usec): min=268, max=41146, avg=538.02, stdev=2200.80 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 269], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 334], 00:15:20.789 | 30.00th=[ 355], 40.00th=[ 379], 50.00th=[ 392], 60.00th=[ 412], 00:15:20.789 | 70.00th=[ 441], 80.00th=[ 474], 90.00th=[ 510], 95.00th=[ 553], 00:15:20.789 | 99.00th=[ 668], 99.50th=[ 693], 99.90th=[41157], 99.95th=[41157], 00:15:20.789 | 99.99th=[41157] 00:15:20.789 write: IOPS=1334, BW=5339KiB/s (5467kB/s)(5344KiB/1001msec); 0 zone resets 00:15:20.789 slat (usec): min=7, max=24547, avg=37.76, stdev=671.12 00:15:20.789 clat (usec): min=176, max=854, avg=290.65, stdev=84.39 00:15:20.789 lat (usec): min=193, max=25026, avg=328.41, stdev=681.95 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 225], 20.00th=[ 231], 00:15:20.789 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 273], 00:15:20.789 | 70.00th=[ 310], 80.00th=[ 359], 90.00th=[ 420], 95.00th=[ 469], 00:15:20.789 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 824], 99.95th=[ 857], 00:15:20.789 | 99.99th=[ 857] 00:15:20.789 bw ( KiB/s): min= 4096, max= 4096, per=24.14%, avg=4096.00, stdev= 0.00, samples=1 00:15:20.789 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:20.789 lat (usec) : 250=25.08%, 500=67.84%, 750=6.82%, 1000=0.08% 00:15:20.789 lat (msec) : 4=0.04%, 50=0.13% 00:15:20.789 cpu : usr=2.40%, sys=4.80%, ctx=2362, majf=0, minf=1 00:15:20.789 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.789 issued rwts: total=1024,1336,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.789 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.789 job2: (groupid=0, jobs=1): err= 0: pid=14920: Fri Jul 12 15:51:50 2024 00:15:20.789 read: IOPS=777, 
BW=3111KiB/s (3185kB/s)(3232KiB/1039msec) 00:15:20.789 slat (nsec): min=5995, max=64859, avg=12013.94, stdev=8358.21 00:15:20.789 clat (usec): min=386, max=41982, avg=854.57, stdev=3770.83 00:15:20.789 lat (usec): min=393, max=42005, avg=866.59, stdev=3771.68 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 441], 00:15:20.789 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 506], 00:15:20.789 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 611], 95.00th=[ 635], 00:15:20.789 | 99.00th=[ 1090], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:20.789 | 99.99th=[42206] 00:15:20.789 write: IOPS=985, BW=3942KiB/s (4037kB/s)(4096KiB/1039msec); 0 zone resets 00:15:20.789 slat (usec): min=7, max=24516, avg=39.60, stdev=765.70 00:15:20.789 clat (usec): min=201, max=568, avg=283.59, stdev=66.29 00:15:20.789 lat (usec): min=212, max=24928, avg=323.19, stdev=773.03 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 233], 00:15:20.789 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 273], 00:15:20.789 | 70.00th=[ 306], 80.00th=[ 343], 90.00th=[ 383], 95.00th=[ 412], 00:15:20.789 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 570], 99.95th=[ 570], 00:15:20.789 | 99.99th=[ 570] 00:15:20.789 bw ( KiB/s): min= 2672, max= 5520, per=24.14%, avg=4096.00, stdev=2013.84, samples=2 00:15:20.789 iops : min= 668, max= 1380, avg=1024.00, stdev=503.46, samples=2 00:15:20.789 lat (usec) : 250=25.11%, 500=55.13%, 750=19.10%, 1000=0.16% 00:15:20.789 lat (msec) : 2=0.05%, 4=0.05%, 50=0.38% 00:15:20.789 cpu : usr=1.64%, sys=3.37%, ctx=1834, majf=0, minf=2 00:15:20.789 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.789 issued rwts: total=808,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.789 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.789 job3: (groupid=0, jobs=1): err= 0: pid=14921: Fri Jul 12 15:51:50 2024 00:15:20.789 read: IOPS=1512, BW=6050KiB/s (6195kB/s)(6056KiB/1001msec) 00:15:20.789 slat (nsec): min=5675, max=66513, avg=12427.39, stdev=9045.21 00:15:20.789 clat (usec): min=266, max=565, avg=350.94, stdev=39.00 00:15:20.789 lat (usec): min=275, max=572, avg=363.37, stdev=42.33 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 318], 00:15:20.789 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 359], 00:15:20.789 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 412], 00:15:20.789 | 99.00th=[ 465], 99.50th=[ 494], 99.90th=[ 553], 99.95th=[ 570], 00:15:20.789 | 99.99th=[ 570] 00:15:20.789 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:20.789 slat (usec): min=7, max=25607, avg=31.94, stdev=653.08 00:15:20.789 clat (usec): min=180, max=479, avg=253.02, stdev=57.96 00:15:20.789 lat (usec): min=188, max=25960, avg=284.96, stdev=658.65 00:15:20.789 clat percentiles (usec): 00:15:20.789 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 206], 00:15:20.790 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 251], 00:15:20.790 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 343], 95.00th=[ 375], 00:15:20.790 | 99.00th=[ 429], 99.50th=[ 437], 99.90th=[ 465], 99.95th=[ 482], 00:15:20.790 | 99.99th=[ 482] 00:15:20.790 bw ( KiB/s): min= 
8192, max= 8192, per=48.27%, avg=8192.00, stdev= 0.00, samples=1 00:15:20.790 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:20.790 lat (usec) : 250=30.16%, 500=69.61%, 750=0.23% 00:15:20.790 cpu : usr=2.60%, sys=5.40%, ctx=3052, majf=0, minf=1 00:15:20.790 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.790 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.790 issued rwts: total=1514,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.790 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.790 00:15:20.790 Run status group 0 (all jobs): 00:15:20.790 READ: bw=13.6MiB/s (14.3MB/s), 1074KiB/s-6050KiB/s (1100kB/s-6195kB/s), io=14.1MiB (14.8MB), run=1001-1039msec 00:15:20.790 WRITE: bw=16.6MiB/s (17.4MB/s), 2044KiB/s-6138KiB/s (2093kB/s-6285kB/s), io=17.2MiB (18.1MB), run=1001-1039msec 00:15:20.790 00:15:20.790 Disk stats (read/write): 00:15:20.790 nvme0n1: ios=318/512, merge=0/0, ticks=842/171, in_queue=1013, util=95.69% 00:15:20.790 nvme0n2: ios=880/1024, merge=0/0, ticks=1362/293, in_queue=1655, util=95.21% 00:15:20.790 nvme0n3: ios=835/1024, merge=0/0, ticks=1412/269, in_queue=1681, util=97.60% 00:15:20.790 nvme0n4: ios=1158/1536, merge=0/0, ticks=1372/357, in_queue=1729, util=98.53% 00:15:20.790 15:51:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:20.790 [global] 00:15:20.790 thread=1 00:15:20.790 invalidate=1 00:15:20.790 rw=randwrite 00:15:20.790 time_based=1 00:15:20.790 runtime=1 00:15:20.790 ioengine=libaio 00:15:20.790 direct=1 00:15:20.790 bs=4096 00:15:20.790 iodepth=1 00:15:20.790 norandommap=0 00:15:20.790 numjobs=1 00:15:20.790 00:15:20.790 verify_dump=1 00:15:20.790 verify_backlog=512 00:15:20.790 verify_state_save=0 00:15:20.790 do_verify=1 00:15:20.790 verify=crc32c-intel 00:15:20.790 [job0] 00:15:20.790 filename=/dev/nvme0n1 00:15:20.790 [job1] 00:15:20.790 filename=/dev/nvme0n2 00:15:20.790 [job2] 00:15:20.790 filename=/dev/nvme0n3 00:15:20.790 [job3] 00:15:20.790 filename=/dev/nvme0n4 00:15:20.790 Could not set queue depth (nvme0n1) 00:15:20.790 Could not set queue depth (nvme0n2) 00:15:20.790 Could not set queue depth (nvme0n3) 00:15:20.790 Could not set queue depth (nvme0n4) 00:15:21.047 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.047 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.047 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.047 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.047 fio-3.35 00:15:21.047 Starting 4 threads 00:15:22.416 00:15:22.416 job0: (groupid=0, jobs=1): err= 0: pid=15147: Fri Jul 12 15:51:51 2024 00:15:22.416 read: IOPS=469, BW=1880KiB/s (1925kB/s)(1936KiB/1030msec) 00:15:22.416 slat (nsec): min=8557, max=33348, avg=11057.23, stdev=4754.90 00:15:22.416 clat (usec): min=314, max=41257, avg=1789.51, stdev=7253.91 00:15:22.416 lat (usec): min=323, max=41267, avg=1800.57, stdev=7256.21 00:15:22.416 clat percentiles (usec): 00:15:22.416 | 1.00th=[ 383], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 437], 00:15:22.416 | 30.00th=[ 441], 40.00th=[ 445], 50.00th=[ 449], 
60.00th=[ 457], 00:15:22.416 | 70.00th=[ 461], 80.00th=[ 469], 90.00th=[ 482], 95.00th=[ 510], 00:15:22.416 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:22.416 | 99.99th=[41157] 00:15:22.416 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:15:22.416 slat (nsec): min=6412, max=75759, avg=16540.91, stdev=8640.58 00:15:22.416 clat (usec): min=201, max=458, avg=284.67, stdev=49.89 00:15:22.416 lat (usec): min=210, max=480, avg=301.21, stdev=52.21 00:15:22.416 clat percentiles (usec): 00:15:22.416 | 1.00th=[ 215], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 253], 00:15:22.416 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:15:22.416 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 367], 95.00th=[ 400], 00:15:22.416 | 99.00th=[ 437], 99.50th=[ 453], 99.90th=[ 457], 99.95th=[ 457], 00:15:22.416 | 99.99th=[ 457] 00:15:22.416 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:15:22.416 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:22.416 lat (usec) : 250=8.84%, 500=87.75%, 750=1.81% 00:15:22.416 lat (msec) : 50=1.61% 00:15:22.416 cpu : usr=0.68%, sys=1.36%, ctx=996, majf=0, minf=1 00:15:22.416 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.416 issued rwts: total=484,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.416 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.416 job1: (groupid=0, jobs=1): err= 0: pid=15148: Fri Jul 12 15:51:51 2024 00:15:22.416 read: IOPS=34, BW=137KiB/s (140kB/s)(140KiB/1024msec) 00:15:22.416 slat (nsec): min=6966, max=53988, avg=20727.43, stdev=11301.27 00:15:22.416 clat (usec): min=360, max=42436, avg=25180.65, stdev=20484.52 00:15:22.416 lat (usec): min=373, max=42449, avg=25201.38, stdev=20484.50 00:15:22.416 clat percentiles (usec): 00:15:22.416 | 1.00th=[ 359], 5.00th=[ 367], 10.00th=[ 424], 20.00th=[ 437], 00:15:22.416 | 30.00th=[ 502], 40.00th=[ 562], 50.00th=[41157], 60.00th=[41157], 00:15:22.416 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:22.416 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:22.416 | 99.99th=[42206] 00:15:22.416 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:15:22.416 slat (nsec): min=5806, max=37234, avg=14082.00, stdev=6459.64 00:15:22.416 clat (usec): min=214, max=511, avg=258.88, stdev=40.41 00:15:22.416 lat (usec): min=222, max=521, avg=272.96, stdev=41.29 00:15:22.416 clat percentiles (usec): 00:15:22.416 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:15:22.416 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:15:22.416 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 367], 00:15:22.416 | 99.00th=[ 449], 99.50th=[ 465], 99.90th=[ 510], 99.95th=[ 510], 00:15:22.416 | 99.99th=[ 510] 00:15:22.416 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:15:22.416 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:22.416 lat (usec) : 250=48.63%, 500=46.25%, 750=1.28% 00:15:22.416 lat (msec) : 50=3.84% 00:15:22.417 cpu : usr=0.39%, sys=0.78%, ctx=547, majf=0, minf=2 00:15:22.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.417 issued rwts: total=35,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.417 job2: (groupid=0, jobs=1): err= 0: pid=15151: Fri Jul 12 15:51:51 2024 00:15:22.417 read: IOPS=75, BW=303KiB/s (310kB/s)(312KiB/1030msec) 00:15:22.417 slat (nsec): min=5830, max=38083, avg=11927.90, stdev=9576.84 00:15:22.417 clat (usec): min=360, max=41189, avg=11349.39, stdev=18100.06 00:15:22.417 lat (usec): min=367, max=41198, avg=11361.31, stdev=18106.93 00:15:22.417 clat percentiles (usec): 00:15:22.417 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 392], 00:15:22.417 | 30.00th=[ 453], 40.00th=[ 457], 50.00th=[ 461], 60.00th=[ 465], 00:15:22.417 | 70.00th=[ 478], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:22.417 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:22.417 | 99.99th=[41157] 00:15:22.417 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:15:22.417 slat (nsec): min=7992, max=56196, avg=18839.96, stdev=7981.80 00:15:22.417 clat (usec): min=209, max=355, avg=255.80, stdev=16.20 00:15:22.417 lat (usec): min=225, max=388, avg=274.64, stdev=19.41 00:15:22.417 clat percentiles (usec): 00:15:22.417 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:15:22.417 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 260], 00:15:22.417 | 70.00th=[ 262], 80.00th=[ 265], 90.00th=[ 269], 95.00th=[ 281], 00:15:22.417 | 99.00th=[ 318], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 355], 00:15:22.417 | 99.99th=[ 355] 00:15:22.417 bw ( KiB/s): min= 4096, max= 4096, per=41.20%, avg=4096.00, stdev= 0.00, samples=1 00:15:22.417 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:22.417 lat (usec) : 250=29.32%, 500=67.12% 00:15:22.417 lat (msec) : 50=3.56% 00:15:22.417 cpu : usr=0.68%, sys=1.36%, ctx=591, majf=0, minf=1 00:15:22.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.417 issued rwts: total=78,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.417 job3: (groupid=0, jobs=1): err= 0: pid=15152: Fri Jul 12 15:51:51 2024 00:15:22.417 read: IOPS=514, BW=2059KiB/s (2108kB/s)(2108KiB/1024msec) 00:15:22.417 slat (nsec): min=4790, max=36195, avg=12912.82, stdev=3904.18 00:15:22.417 clat (usec): min=356, max=41999, avg=1390.18, stdev=6162.01 00:15:22.417 lat (usec): min=361, max=42017, avg=1403.09, stdev=6164.26 00:15:22.417 clat percentiles (usec): 00:15:22.417 | 1.00th=[ 367], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 420], 00:15:22.417 | 30.00th=[ 429], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 457], 00:15:22.417 | 70.00th=[ 465], 80.00th=[ 478], 90.00th=[ 502], 95.00th=[ 537], 00:15:22.417 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:22.417 | 99.99th=[42206] 00:15:22.417 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:15:22.417 slat (nsec): min=5996, max=61474, avg=13181.08, stdev=9128.63 00:15:22.417 clat (usec): min=182, max=1189, avg=258.98, stdev=73.51 00:15:22.417 lat (usec): min=189, max=1195, avg=272.16, stdev=77.05 00:15:22.417 clat percentiles (usec): 00:15:22.417 | 1.00th=[ 
188], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:15:22.417 | 30.00th=[ 225], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 255], 00:15:22.417 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 334], 95.00th=[ 383], 00:15:22.417 | 99.00th=[ 465], 99.50th=[ 494], 99.90th=[ 1123], 99.95th=[ 1188], 00:15:22.417 | 99.99th=[ 1188] 00:15:22.417 bw ( KiB/s): min= 632, max= 7560, per=41.20%, avg=4096.00, stdev=4898.84, samples=2 00:15:22.417 iops : min= 158, max= 1890, avg=1024.00, stdev=1224.71, samples=2 00:15:22.417 lat (usec) : 250=36.81%, 500=59.32%, 750=2.84%, 1000=0.06% 00:15:22.417 lat (msec) : 2=0.19%, 50=0.77% 00:15:22.417 cpu : usr=1.17%, sys=2.05%, ctx=1553, majf=0, minf=1 00:15:22.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.417 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.417 00:15:22.417 Run status group 0 (all jobs): 00:15:22.417 READ: bw=4365KiB/s (4470kB/s), 137KiB/s-2059KiB/s (140kB/s-2108kB/s), io=4496KiB (4604kB), run=1024-1030msec 00:15:22.417 WRITE: bw=9942KiB/s (10.2MB/s), 1988KiB/s-4000KiB/s (2036kB/s-4096kB/s), io=10.0MiB (10.5MB), run=1024-1030msec 00:15:22.417 00:15:22.417 Disk stats (read/write): 00:15:22.417 nvme0n1: ios=522/512, merge=0/0, ticks=775/137, in_queue=912, util=91.28% 00:15:22.417 nvme0n2: ios=75/512, merge=0/0, ticks=750/130, in_queue=880, util=92.58% 00:15:22.417 nvme0n3: ios=99/512, merge=0/0, ticks=1643/123, in_queue=1766, util=98.54% 00:15:22.417 nvme0n4: ios=542/1024, merge=0/0, ticks=1457/259, in_queue=1716, util=97.79% 00:15:22.417 15:51:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:22.417 [global] 00:15:22.417 thread=1 00:15:22.417 invalidate=1 00:15:22.417 rw=write 00:15:22.417 time_based=1 00:15:22.417 runtime=1 00:15:22.417 ioengine=libaio 00:15:22.417 direct=1 00:15:22.417 bs=4096 00:15:22.417 iodepth=128 00:15:22.417 norandommap=0 00:15:22.417 numjobs=1 00:15:22.417 00:15:22.417 verify_dump=1 00:15:22.417 verify_backlog=512 00:15:22.417 verify_state_save=0 00:15:22.417 do_verify=1 00:15:22.417 verify=crc32c-intel 00:15:22.417 [job0] 00:15:22.417 filename=/dev/nvme0n1 00:15:22.417 [job1] 00:15:22.417 filename=/dev/nvme0n2 00:15:22.417 [job2] 00:15:22.417 filename=/dev/nvme0n3 00:15:22.417 [job3] 00:15:22.417 filename=/dev/nvme0n4 00:15:22.417 Could not set queue depth (nvme0n1) 00:15:22.417 Could not set queue depth (nvme0n2) 00:15:22.417 Could not set queue depth (nvme0n3) 00:15:22.417 Could not set queue depth (nvme0n4) 00:15:22.417 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.417 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.417 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.417 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.417 fio-3.35 00:15:22.417 Starting 4 threads 00:15:23.794 00:15:23.794 job0: (groupid=0, jobs=1): err= 0: pid=15385: Fri Jul 12 15:51:53 2024 00:15:23.794 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:15:23.794 
slat (usec): min=3, max=7334, avg=83.65, stdev=473.30 00:15:23.794 clat (usec): min=5717, max=20772, avg=11028.79, stdev=1779.20 00:15:23.794 lat (usec): min=5746, max=20792, avg=11112.44, stdev=1809.71 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[ 7504], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10028], 00:15:23.794 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10945], 00:15:23.794 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13698], 95.00th=[14484], 00:15:23.794 | 99.00th=[15533], 99.50th=[18482], 99.90th=[20317], 99.95th=[20317], 00:15:23.794 | 99.99th=[20841] 00:15:23.794 write: IOPS=5816, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1002msec); 0 zone resets 00:15:23.794 slat (usec): min=4, max=12431, avg=82.63, stdev=505.94 00:15:23.794 clat (usec): min=334, max=19674, avg=10969.78, stdev=2210.02 00:15:23.794 lat (usec): min=3594, max=24474, avg=11052.41, stdev=2230.45 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[ 5080], 5.00th=[ 7767], 10.00th=[ 9110], 20.00th=[ 9896], 00:15:23.794 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:15:23.794 | 70.00th=[11076], 80.00th=[12387], 90.00th=[13698], 95.00th=[14615], 00:15:23.794 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:15:23.794 | 99.99th=[19792] 00:15:23.794 bw ( KiB/s): min=21744, max=23856, per=34.78%, avg=22800.00, stdev=1493.41, samples=2 00:15:23.794 iops : min= 5436, max= 5964, avg=5700.00, stdev=373.35, samples=2 00:15:23.794 lat (usec) : 500=0.01% 00:15:23.794 lat (msec) : 4=0.15%, 10=21.06%, 20=78.64%, 50=0.14% 00:15:23.794 cpu : usr=5.39%, sys=10.19%, ctx=511, majf=0, minf=9 00:15:23.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:23.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.794 issued rwts: total=5632,5828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.794 job1: (groupid=0, jobs=1): err= 0: pid=15386: Fri Jul 12 15:51:53 2024 00:15:23.794 read: IOPS=4368, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1002msec) 00:15:23.794 slat (usec): min=3, max=5823, avg=101.81, stdev=460.36 00:15:23.794 clat (usec): min=660, max=18523, avg=13435.18, stdev=1688.91 00:15:23.794 lat (usec): min=3035, max=18527, avg=13536.99, stdev=1649.28 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[ 6521], 5.00th=[10945], 10.00th=[11863], 20.00th=[12649], 00:15:23.794 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:15:23.794 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15533], 00:15:23.794 | 99.00th=[17957], 99.50th=[18482], 99.90th=[18482], 99.95th=[18482], 00:15:23.794 | 99.99th=[18482] 00:15:23.794 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:15:23.794 slat (usec): min=4, max=18557, avg=111.31, stdev=520.76 00:15:23.794 clat (usec): min=8896, max=32692, avg=14293.50, stdev=1947.57 00:15:23.794 lat (usec): min=9851, max=32717, avg=14404.82, stdev=1926.52 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[10945], 5.00th=[11863], 10.00th=[12649], 20.00th=[13304], 00:15:23.794 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:15:23.794 | 70.00th=[14615], 80.00th=[15008], 90.00th=[15795], 95.00th=[16581], 00:15:23.794 | 99.00th=[18482], 99.50th=[29492], 99.90th=[32637], 99.95th=[32637], 00:15:23.794 | 99.99th=[32637] 00:15:23.794 bw ( 
KiB/s): min=17288, max=19576, per=28.12%, avg=18432.00, stdev=1617.86, samples=2 00:15:23.794 iops : min= 4322, max= 4894, avg=4608.00, stdev=404.47, samples=2 00:15:23.794 lat (usec) : 750=0.01% 00:15:23.794 lat (msec) : 4=0.36%, 10=0.77%, 20=98.51%, 50=0.36% 00:15:23.794 cpu : usr=6.09%, sys=7.39%, ctx=575, majf=0, minf=17 00:15:23.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:23.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.794 issued rwts: total=4377,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.794 job2: (groupid=0, jobs=1): err= 0: pid=15387: Fri Jul 12 15:51:53 2024 00:15:23.794 read: IOPS=2055, BW=8222KiB/s (8420kB/s)(8576KiB/1043msec) 00:15:23.794 slat (usec): min=2, max=14670, avg=229.95, stdev=1265.40 00:15:23.794 clat (usec): min=4271, max=78094, avg=29819.55, stdev=18778.24 00:15:23.794 lat (usec): min=4286, max=78122, avg=30049.50, stdev=18896.98 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[ 5735], 5.00th=[10814], 10.00th=[10945], 20.00th=[12649], 00:15:23.794 | 30.00th=[13173], 40.00th=[15008], 50.00th=[22414], 60.00th=[39060], 00:15:23.794 | 70.00th=[45351], 80.00th=[50594], 90.00th=[55313], 95.00th=[58983], 00:15:23.794 | 99.00th=[70779], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:15:23.794 | 99.99th=[78119] 00:15:23.794 write: IOPS=2454, BW=9818KiB/s (10.1MB/s)(10.0MiB/1043msec); 0 zone resets 00:15:23.794 slat (usec): min=3, max=10356, avg=186.32, stdev=965.12 00:15:23.794 clat (usec): min=1145, max=92251, avg=26606.36, stdev=17474.99 00:15:23.794 lat (usec): min=1154, max=92266, avg=26792.67, stdev=17579.27 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[ 3392], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[11469], 00:15:23.794 | 30.00th=[13435], 40.00th=[17171], 50.00th=[21365], 60.00th=[25560], 00:15:23.794 | 70.00th=[30278], 80.00th=[47449], 90.00th=[53740], 95.00th=[56361], 00:15:23.794 | 99.00th=[84411], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:15:23.794 | 99.99th=[91751] 00:15:23.794 bw ( KiB/s): min= 7768, max=12464, per=15.43%, avg=10116.00, stdev=3320.57, samples=2 00:15:23.794 iops : min= 1942, max= 3116, avg=2529.00, stdev=830.14, samples=2 00:15:23.794 lat (msec) : 2=0.13%, 4=0.45%, 10=7.57%, 20=40.37%, 50=33.40% 00:15:23.794 lat (msec) : 100=18.09% 00:15:23.794 cpu : usr=1.73%, sys=2.88%, ctx=248, majf=0, minf=9 00:15:23.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:23.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.794 issued rwts: total=2144,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.794 job3: (groupid=0, jobs=1): err= 0: pid=15388: Fri Jul 12 15:51:53 2024 00:15:23.794 read: IOPS=3772, BW=14.7MiB/s (15.5MB/s)(14.8MiB/1002msec) 00:15:23.794 slat (usec): min=3, max=8736, avg=128.51, stdev=733.14 00:15:23.794 clat (usec): min=1536, max=29800, avg=16868.98, stdev=3130.38 00:15:23.794 lat (usec): min=1553, max=30481, avg=16997.49, stdev=3171.21 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[ 9372], 5.00th=[12256], 10.00th=[13566], 20.00th=[14877], 00:15:23.794 | 30.00th=[15401], 40.00th=[15664], 50.00th=[16319], 60.00th=[17171], 
00:15:23.794 | 70.00th=[18220], 80.00th=[19006], 90.00th=[21103], 95.00th=[22676], 00:15:23.794 | 99.00th=[24773], 99.50th=[26346], 99.90th=[26346], 99.95th=[27657], 00:15:23.794 | 99.99th=[29754] 00:15:23.794 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:15:23.794 slat (usec): min=4, max=8866, avg=117.04, stdev=769.86 00:15:23.794 clat (usec): min=7651, max=28480, avg=15288.37, stdev=2236.01 00:15:23.794 lat (usec): min=8192, max=28489, avg=15405.42, stdev=2334.19 00:15:23.794 clat percentiles (usec): 00:15:23.794 | 1.00th=[10159], 5.00th=[12518], 10.00th=[13173], 20.00th=[13960], 00:15:23.794 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14484], 60.00th=[14746], 00:15:23.794 | 70.00th=[16450], 80.00th=[17433], 90.00th=[17957], 95.00th=[19530], 00:15:23.794 | 99.00th=[21627], 99.50th=[22676], 99.90th=[25822], 99.95th=[26084], 00:15:23.794 | 99.99th=[28443] 00:15:23.794 bw ( KiB/s): min=16384, max=16384, per=24.99%, avg=16384.00, stdev= 0.00, samples=2 00:15:23.794 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:15:23.794 lat (msec) : 2=0.09%, 10=1.05%, 20=89.36%, 50=9.50% 00:15:23.794 cpu : usr=4.30%, sys=6.69%, ctx=259, majf=0, minf=15 00:15:23.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:23.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:23.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:23.794 issued rwts: total=3780,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:23.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:23.794 00:15:23.794 Run status group 0 (all jobs): 00:15:23.794 READ: bw=59.7MiB/s (62.6MB/s), 8222KiB/s-22.0MiB/s (8420kB/s-23.0MB/s), io=62.2MiB (65.3MB), run=1002-1043msec 00:15:23.794 WRITE: bw=64.0MiB/s (67.1MB/s), 9818KiB/s-22.7MiB/s (10.1MB/s-23.8MB/s), io=66.8MiB (70.0MB), run=1002-1043msec 00:15:23.794 00:15:23.794 Disk stats (read/write): 00:15:23.795 nvme0n1: ios=4649/4943, merge=0/0, ticks=25942/25410, in_queue=51352, util=97.70% 00:15:23.795 nvme0n2: ios=3624/3966, merge=0/0, ticks=12384/13761, in_queue=26145, util=97.97% 00:15:23.795 nvme0n3: ios=2096/2247, merge=0/0, ticks=25078/20388, in_queue=45466, util=97.06% 00:15:23.795 nvme0n4: ios=3095/3484, merge=0/0, ticks=27279/24899, in_queue=52178, util=97.04% 00:15:23.795 15:51:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:23.795 [global] 00:15:23.795 thread=1 00:15:23.795 invalidate=1 00:15:23.795 rw=randwrite 00:15:23.795 time_based=1 00:15:23.795 runtime=1 00:15:23.795 ioengine=libaio 00:15:23.795 direct=1 00:15:23.795 bs=4096 00:15:23.795 iodepth=128 00:15:23.795 norandommap=0 00:15:23.795 numjobs=1 00:15:23.795 00:15:23.795 verify_dump=1 00:15:23.795 verify_backlog=512 00:15:23.795 verify_state_save=0 00:15:23.795 do_verify=1 00:15:23.795 verify=crc32c-intel 00:15:23.795 [job0] 00:15:23.795 filename=/dev/nvme0n1 00:15:23.795 [job1] 00:15:23.795 filename=/dev/nvme0n2 00:15:23.795 [job2] 00:15:23.795 filename=/dev/nvme0n3 00:15:23.795 [job3] 00:15:23.795 filename=/dev/nvme0n4 00:15:23.795 Could not set queue depth (nvme0n1) 00:15:23.795 Could not set queue depth (nvme0n2) 00:15:23.795 Could not set queue depth (nvme0n3) 00:15:23.795 Could not set queue depth (nvme0n4) 00:15:24.051 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.051 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.051 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.051 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.051 fio-3.35 00:15:24.051 Starting 4 threads 00:15:25.422 00:15:25.422 job0: (groupid=0, jobs=1): err= 0: pid=15732: Fri Jul 12 15:51:54 2024 00:15:25.422 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:15:25.422 slat (usec): min=3, max=24173, avg=209.41, stdev=1315.49 00:15:25.422 clat (usec): min=9159, max=78660, avg=25465.57, stdev=15637.72 00:15:25.422 lat (usec): min=9427, max=78675, avg=25674.98, stdev=15701.08 00:15:25.422 clat percentiles (usec): 00:15:25.422 | 1.00th=[10159], 5.00th=[11863], 10.00th=[12125], 20.00th=[12911], 00:15:25.422 | 30.00th=[14353], 40.00th=[16319], 50.00th=[19792], 60.00th=[23725], 00:15:25.422 | 70.00th=[28443], 80.00th=[39584], 90.00th=[51119], 95.00th=[60031], 00:15:25.422 | 99.00th=[77071], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168], 00:15:25.422 | 99.99th=[79168] 00:15:25.422 write: IOPS=2967, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1002msec); 0 zone resets 00:15:25.422 slat (usec): min=4, max=20585, avg=147.73, stdev=971.63 00:15:25.422 clat (usec): min=317, max=76957, avg=20461.19, stdev=13019.19 00:15:25.422 lat (usec): min=3709, max=76965, avg=20608.93, stdev=13060.99 00:15:25.422 clat percentiles (usec): 00:15:25.422 | 1.00th=[ 5407], 5.00th=[10814], 10.00th=[12125], 20.00th=[13042], 00:15:25.422 | 30.00th=[13829], 40.00th=[14222], 50.00th=[14877], 60.00th=[16909], 00:15:25.422 | 70.00th=[19530], 80.00th=[25035], 90.00th=[36439], 95.00th=[50070], 00:15:25.422 | 99.00th=[77071], 99.50th=[77071], 99.90th=[77071], 99.95th=[77071], 00:15:25.422 | 99.99th=[77071] 00:15:25.422 bw ( KiB/s): min=11104, max=11656, per=16.14%, avg=11380.00, stdev=390.32, samples=2 00:15:25.422 iops : min= 2776, max= 2914, avg=2845.00, stdev=97.58, samples=2 00:15:25.422 lat (usec) : 500=0.02% 00:15:25.422 lat (msec) : 4=0.05%, 10=1.81%, 20=60.15%, 50=30.76%, 100=7.21% 00:15:25.422 cpu : usr=3.30%, sys=5.09%, ctx=335, majf=0, minf=1 00:15:25.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:25.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.422 issued rwts: total=2560,2973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.422 job1: (groupid=0, jobs=1): err= 0: pid=15733: Fri Jul 12 15:51:54 2024 00:15:25.422 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:15:25.422 slat (usec): min=3, max=5741, avg=82.77, stdev=460.01 00:15:25.422 clat (usec): min=6688, max=17520, avg=11123.51, stdev=1272.96 00:15:25.422 lat (usec): min=7620, max=17525, avg=11206.27, stdev=1324.14 00:15:25.422 clat percentiles (usec): 00:15:25.422 | 1.00th=[ 8225], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10290], 00:15:25.422 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:15:25.422 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[13435], 00:15:25.422 | 99.00th=[15139], 99.50th=[15270], 99.90th=[16581], 99.95th=[16909], 00:15:25.422 | 99.99th=[17433] 00:15:25.422 write: IOPS=5887, BW=23.0MiB/s (24.1MB/s)(23.0MiB/1002msec); 0 zone resets 00:15:25.422 slat (usec): min=4, 
max=11140, avg=81.92, stdev=540.28 00:15:25.422 clat (usec): min=1968, max=20546, avg=10926.90, stdev=1729.78 00:15:25.422 lat (usec): min=1976, max=20566, avg=11008.81, stdev=1752.63 00:15:25.422 clat percentiles (usec): 00:15:25.422 | 1.00th=[ 6456], 5.00th=[ 7701], 10.00th=[ 9634], 20.00th=[10159], 00:15:25.422 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:15:25.422 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[13566], 00:15:25.422 | 99.00th=[16450], 99.50th=[17171], 99.90th=[20579], 99.95th=[20579], 00:15:25.422 | 99.99th=[20579] 00:15:25.422 bw ( KiB/s): min=22224, max=24000, per=32.78%, avg=23112.00, stdev=1255.82, samples=2 00:15:25.422 iops : min= 5556, max= 6000, avg=5778.00, stdev=313.96, samples=2 00:15:25.422 lat (msec) : 2=0.03%, 4=0.24%, 10=15.64%, 20=83.95%, 50=0.14% 00:15:25.422 cpu : usr=6.49%, sys=9.59%, ctx=348, majf=0, minf=1 00:15:25.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:25.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.422 issued rwts: total=5632,5899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.422 job2: (groupid=0, jobs=1): err= 0: pid=15734: Fri Jul 12 15:51:54 2024 00:15:25.422 read: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1007msec) 00:15:25.422 slat (usec): min=3, max=15703, avg=127.99, stdev=707.36 00:15:25.422 clat (usec): min=766, max=34418, avg=15827.57, stdev=4079.77 00:15:25.422 lat (usec): min=9642, max=38289, avg=15955.57, stdev=4090.13 00:15:25.422 clat percentiles (usec): 00:15:25.422 | 1.00th=[10421], 5.00th=[11338], 10.00th=[12125], 20.00th=[13042], 00:15:25.422 | 30.00th=[13698], 40.00th=[14484], 50.00th=[14877], 60.00th=[15401], 00:15:25.422 | 70.00th=[16319], 80.00th=[17433], 90.00th=[20317], 95.00th=[24773], 00:15:25.422 | 99.00th=[32113], 99.50th=[32375], 99.90th=[34341], 99.95th=[34341], 00:15:25.422 | 99.99th=[34341] 00:15:25.422 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:15:25.422 slat (usec): min=4, max=22890, avg=122.84, stdev=817.27 00:15:25.422 clat (usec): min=6590, max=79136, avg=16995.16, stdev=9219.63 00:15:25.422 lat (usec): min=6603, max=79160, avg=17118.00, stdev=9276.28 00:15:25.422 clat percentiles (usec): 00:15:25.422 | 1.00th=[ 9765], 5.00th=[11863], 10.00th=[12256], 20.00th=[12911], 00:15:25.422 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[15139], 00:15:25.422 | 70.00th=[16057], 80.00th=[17695], 90.00th=[19792], 95.00th=[32900], 00:15:25.422 | 99.00th=[63701], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:15:25.422 | 99.99th=[79168] 00:15:25.422 bw ( KiB/s): min=14328, max=17912, per=22.86%, avg=16120.00, stdev=2534.27, samples=2 00:15:25.422 iops : min= 3582, max= 4478, avg=4030.00, stdev=633.57, samples=2 00:15:25.422 lat (usec) : 1000=0.01% 00:15:25.422 lat (msec) : 10=1.11%, 20=88.66%, 50=8.83%, 100=1.38% 00:15:25.422 cpu : usr=4.97%, sys=6.56%, ctx=366, majf=0, minf=1 00:15:25.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:25.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.422 issued rwts: total=3646,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.423 
job3: (groupid=0, jobs=1): err= 0: pid=15735: Fri Jul 12 15:51:54 2024 00:15:25.423 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:15:25.423 slat (usec): min=2, max=12718, avg=99.31, stdev=783.58 00:15:25.423 clat (usec): min=1015, max=35865, avg=14217.84, stdev=5608.98 00:15:25.423 lat (usec): min=1019, max=35888, avg=14317.14, stdev=5647.26 00:15:25.423 clat percentiles (usec): 00:15:25.423 | 1.00th=[ 1254], 5.00th=[ 3818], 10.00th=[ 6849], 20.00th=[11076], 00:15:25.423 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14091], 60.00th=[15008], 00:15:25.423 | 70.00th=[16057], 80.00th=[17433], 90.00th=[20579], 95.00th=[23725], 00:15:25.423 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:15:25.423 | 99.99th=[35914] 00:15:25.423 write: IOPS=4749, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1007msec); 0 zone resets 00:15:25.423 slat (usec): min=3, max=11866, avg=87.18, stdev=639.53 00:15:25.423 clat (usec): min=1006, max=53971, avg=13021.66, stdev=6869.61 00:15:25.423 lat (usec): min=1013, max=53976, avg=13108.84, stdev=6891.36 00:15:25.423 clat percentiles (usec): 00:15:25.423 | 1.00th=[ 1893], 5.00th=[ 3720], 10.00th=[ 5932], 20.00th=[ 8848], 00:15:25.423 | 30.00th=[10814], 40.00th=[11994], 50.00th=[12387], 60.00th=[12911], 00:15:25.423 | 70.00th=[13566], 80.00th=[15139], 90.00th=[20579], 95.00th=[23987], 00:15:25.423 | 99.00th=[41681], 99.50th=[48497], 99.90th=[53740], 99.95th=[53740], 00:15:25.423 | 99.99th=[53740] 00:15:25.423 bw ( KiB/s): min=16800, max=20440, per=26.41%, avg=18620.00, stdev=2573.87, samples=2 00:15:25.423 iops : min= 4200, max= 5110, avg=4655.00, stdev=643.47, samples=2 00:15:25.423 lat (msec) : 2=2.18%, 4=3.51%, 10=14.68%, 20=68.50%, 50=10.96% 00:15:25.423 lat (msec) : 100=0.16% 00:15:25.423 cpu : usr=3.18%, sys=5.37%, ctx=415, majf=0, minf=1 00:15:25.423 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:25.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.423 issued rwts: total=4608,4783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.423 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.423 00:15:25.423 Run status group 0 (all jobs): 00:15:25.423 READ: bw=63.8MiB/s (66.9MB/s), 9.98MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=64.2MiB (67.4MB), run=1002-1007msec 00:15:25.423 WRITE: bw=68.9MiB/s (72.2MB/s), 11.6MiB/s-23.0MiB/s (12.2MB/s-24.1MB/s), io=69.3MiB (72.7MB), run=1002-1007msec 00:15:25.423 00:15:25.423 Disk stats (read/write): 00:15:25.423 nvme0n1: ios=2042/2048, merge=0/0, ticks=15701/11717, in_queue=27418, util=97.70% 00:15:25.423 nvme0n2: ios=4647/5120, merge=0/0, ticks=25379/26952, in_queue=52331, util=98.37% 00:15:25.423 nvme0n3: ios=3570/3584, merge=0/0, ticks=19269/16479, in_queue=35748, util=97.18% 00:15:25.423 nvme0n4: ios=3888/4096, merge=0/0, ticks=42775/41133, in_queue=83908, util=90.97% 00:15:25.423 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:25.423 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=15871 00:15:25.423 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:25.423 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:25.423 [global] 00:15:25.423 thread=1 00:15:25.423 invalidate=1 00:15:25.423 rw=read 00:15:25.423 time_based=1 00:15:25.423 runtime=10 00:15:25.423 ioengine=libaio 
00:15:25.423 direct=1 00:15:25.423 bs=4096 00:15:25.423 iodepth=1 00:15:25.423 norandommap=1 00:15:25.423 numjobs=1 00:15:25.423 00:15:25.423 [job0] 00:15:25.423 filename=/dev/nvme0n1 00:15:25.423 [job1] 00:15:25.423 filename=/dev/nvme0n2 00:15:25.423 [job2] 00:15:25.423 filename=/dev/nvme0n3 00:15:25.423 [job3] 00:15:25.423 filename=/dev/nvme0n4 00:15:25.423 Could not set queue depth (nvme0n1) 00:15:25.423 Could not set queue depth (nvme0n2) 00:15:25.423 Could not set queue depth (nvme0n3) 00:15:25.423 Could not set queue depth (nvme0n4) 00:15:25.423 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.423 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.423 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.423 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.423 fio-3.35 00:15:25.423 Starting 4 threads 00:15:28.695 15:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:28.695 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:28.695 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=294912, buflen=4096 00:15:28.695 fio: pid=15968, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:28.695 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:28.695 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:28.695 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8286208, buflen=4096 00:15:28.695 fio: pid=15967, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:28.952 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:28.952 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:28.952 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=20672512, buflen=4096 00:15:28.952 fio: pid=15965, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:29.210 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=39350272, buflen=4096 00:15:29.210 fio: pid=15966, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:29.210 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.210 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:29.210 00:15:29.210 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=15965: Fri Jul 12 15:51:58 2024 00:15:29.210 read: IOPS=1437, BW=5748KiB/s (5886kB/s)(19.7MiB/3512msec) 00:15:29.210 slat (usec): min=4, max=26535, avg=30.83, stdev=469.23 00:15:29.210 clat (usec): min=251, max=41956, avg=655.60, stdev=3117.99 00:15:29.210 lat (usec): min=262, max=41971, avg=683.35, stdev=3144.85 00:15:29.210 
clat percentiles (usec): 00:15:29.210 | 1.00th=[ 285], 5.00th=[ 322], 10.00th=[ 347], 20.00th=[ 375], 00:15:29.210 | 30.00th=[ 388], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 424], 00:15:29.210 | 70.00th=[ 437], 80.00th=[ 465], 90.00th=[ 486], 95.00th=[ 502], 00:15:29.210 | 99.00th=[ 586], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:15:29.210 | 99.99th=[42206] 00:15:29.210 bw ( KiB/s): min= 112, max= 9296, per=36.38%, avg=6490.67, stdev=3603.08, samples=6 00:15:29.210 iops : min= 28, max= 2324, avg=1622.67, stdev=900.77, samples=6 00:15:29.210 lat (usec) : 500=94.45%, 750=4.75%, 1000=0.02% 00:15:29.210 lat (msec) : 2=0.16%, 50=0.59% 00:15:29.210 cpu : usr=1.31%, sys=3.10%, ctx=5052, majf=0, minf=1 00:15:29.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 issued rwts: total=5048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.210 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=15966: Fri Jul 12 15:51:58 2024 00:15:29.210 read: IOPS=2558, BW=9.99MiB/s (10.5MB/s)(37.5MiB/3756msec) 00:15:29.210 slat (usec): min=4, max=12631, avg=14.28, stdev=183.63 00:15:29.210 clat (usec): min=304, max=1953, avg=372.72, stdev=38.06 00:15:29.210 lat (usec): min=310, max=13046, avg=387.01, stdev=188.98 00:15:29.210 clat percentiles (usec): 00:15:29.210 | 1.00th=[ 318], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 351], 00:15:29.210 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 375], 00:15:29.210 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 424], 00:15:29.210 | 99.00th=[ 494], 99.50th=[ 515], 99.90th=[ 578], 99.95th=[ 857], 00:15:29.210 | 99.99th=[ 1958] 00:15:29.210 bw ( KiB/s): min= 9510, max=11424, per=57.80%, avg=10309.43, stdev=607.77, samples=7 00:15:29.210 iops : min= 2377, max= 2856, avg=2577.29, stdev=152.05, samples=7 00:15:29.210 lat (usec) : 500=99.17%, 750=0.76%, 1000=0.03% 00:15:29.210 lat (msec) : 2=0.03% 00:15:29.210 cpu : usr=1.60%, sys=4.23%, ctx=9614, majf=0, minf=1 00:15:29.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 issued rwts: total=9608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.210 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=15967: Fri Jul 12 15:51:58 2024 00:15:29.210 read: IOPS=623, BW=2493KiB/s (2553kB/s)(8092KiB/3246msec) 00:15:29.210 slat (usec): min=5, max=12881, avg=22.04, stdev=286.04 00:15:29.210 clat (usec): min=273, max=42147, avg=1567.38, stdev=6615.12 00:15:29.210 lat (usec): min=278, max=42162, avg=1589.42, stdev=6620.34 00:15:29.210 clat percentiles (usec): 00:15:29.210 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 343], 20.00th=[ 392], 00:15:29.210 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 482], 00:15:29.210 | 70.00th=[ 498], 80.00th=[ 529], 90.00th=[ 611], 95.00th=[ 627], 00:15:29.210 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:29.210 | 99.99th=[42206] 00:15:29.210 bw ( KiB/s): min= 96, max= 5464, per=12.13%, avg=2164.00, 
stdev=2400.43, samples=6 00:15:29.210 iops : min= 24, max= 1366, avg=541.00, stdev=600.11, samples=6 00:15:29.210 lat (usec) : 500=70.16%, 750=27.08% 00:15:29.210 lat (msec) : 50=2.72% 00:15:29.210 cpu : usr=0.40%, sys=1.11%, ctx=2025, majf=0, minf=1 00:15:29.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.210 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=15968: Fri Jul 12 15:51:58 2024 00:15:29.210 read: IOPS=24, BW=98.1KiB/s (100kB/s)(288KiB/2935msec) 00:15:29.210 slat (nsec): min=13256, max=50217, avg=17544.66, stdev=6283.23 00:15:29.210 clat (usec): min=541, max=41131, avg=40419.87, stdev=4766.06 00:15:29.210 lat (usec): min=591, max=41146, avg=40437.38, stdev=4762.14 00:15:29.210 clat percentiles (usec): 00:15:29.210 | 1.00th=[ 545], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:29.210 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.210 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:29.210 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:29.210 | 99.99th=[41157] 00:15:29.210 bw ( KiB/s): min= 96, max= 104, per=0.56%, avg=99.20, stdev= 4.38, samples=5 00:15:29.210 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:15:29.210 lat (usec) : 750=1.37% 00:15:29.210 lat (msec) : 50=97.26% 00:15:29.210 cpu : usr=0.07%, sys=0.00%, ctx=73, majf=0, minf=1 00:15:29.210 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.210 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.210 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.210 00:15:29.210 Run status group 0 (all jobs): 00:15:29.210 READ: bw=17.4MiB/s (18.3MB/s), 98.1KiB/s-9.99MiB/s (100kB/s-10.5MB/s), io=65.4MiB (68.6MB), run=2935-3756msec 00:15:29.210 00:15:29.210 Disk stats (read/write): 00:15:29.210 nvme0n1: ios=5042/0, merge=0/0, ticks=2984/0, in_queue=2984, util=94.88% 00:15:29.210 nvme0n2: ios=9301/0, merge=0/0, ticks=4393/0, in_queue=4393, util=98.47% 00:15:29.210 nvme0n3: ios=1828/0, merge=0/0, ticks=3032/0, in_queue=3032, util=96.39% 00:15:29.210 nvme0n4: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.71% 00:15:29.467 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.467 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:29.724 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.724 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:29.981 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.981 15:51:59 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:30.269 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:30.269 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:30.526 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:30.526 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 15871 00:15:30.526 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:30.526 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:30.783 nvmf hotplug test: fio failed as expected 00:15:30.783 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.041 rmmod nvme_tcp 00:15:31.041 rmmod nvme_fabrics 00:15:31.041 rmmod nvme_keyring 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@489 -- # '[' -n 13848 ']' 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 13848 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 13848 ']' 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 13848 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 13848 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 13848' 00:15:31.041 killing process with pid 13848 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 13848 00:15:31.041 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 13848 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.298 15:52:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.835 15:52:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:33.835 00:15:33.835 real 0m23.425s 00:15:33.835 user 1m21.644s 00:15:33.835 sys 0m6.838s 00:15:33.835 15:52:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.835 15:52:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.835 ************************************ 00:15:33.835 END TEST nvmf_fio_target 00:15:33.835 ************************************ 00:15:33.835 15:52:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:33.835 15:52:02 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:33.835 15:52:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:33.835 15:52:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.835 15:52:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:33.835 ************************************ 00:15:33.835 START TEST nvmf_bdevio 00:15:33.835 ************************************ 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:33.835 * Looking for test storage... 
00:15:33.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.835 15:52:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.836 15:52:03 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:35.735 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:35.735 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.735 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:35.736 Found net devices under 0000:09:00.0: cvl_0_0 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:35.736 
Found net devices under 0000:09:00.1: cvl_0_1 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:15:35.736 00:15:35.736 --- 10.0.0.2 ping statistics --- 00:15:35.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.736 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:15:35.736 00:15:35.736 --- 10.0.0.1 ping statistics --- 00:15:35.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.736 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=18588 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 18588 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 18588 ']' 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.736 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.736 [2024-07-12 15:52:05.274448] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:15:35.736 [2024-07-12 15:52:05.274535] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.736 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.736 [2024-07-12 15:52:05.338065] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.736 [2024-07-12 15:52:05.450459] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.736 [2024-07-12 15:52:05.450511] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:35.736 [2024-07-12 15:52:05.450524] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.736 [2024-07-12 15:52:05.450535] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.736 [2024-07-12 15:52:05.450545] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.736 [2024-07-12 15:52:05.450637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:35.736 [2024-07-12 15:52:05.450679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:35.736 [2024-07-12 15:52:05.450739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:35.736 [2024-07-12 15:52:05.450742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.994 [2024-07-12 15:52:05.611208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.994 Malloc0 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:15:35.994 [2024-07-12 15:52:05.662496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:35.994 { 00:15:35.994 "params": { 00:15:35.994 "name": "Nvme$subsystem", 00:15:35.994 "trtype": "$TEST_TRANSPORT", 00:15:35.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:35.994 "adrfam": "ipv4", 00:15:35.994 "trsvcid": "$NVMF_PORT", 00:15:35.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:35.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:35.994 "hdgst": ${hdgst:-false}, 00:15:35.994 "ddgst": ${ddgst:-false} 00:15:35.994 }, 00:15:35.994 "method": "bdev_nvme_attach_controller" 00:15:35.994 } 00:15:35.994 EOF 00:15:35.994 )") 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:35.994 15:52:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:35.994 "params": { 00:15:35.994 "name": "Nvme1", 00:15:35.994 "trtype": "tcp", 00:15:35.994 "traddr": "10.0.0.2", 00:15:35.994 "adrfam": "ipv4", 00:15:35.994 "trsvcid": "4420", 00:15:35.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:35.994 "hdgst": false, 00:15:35.994 "ddgst": false 00:15:35.994 }, 00:15:35.994 "method": "bdev_nvme_attach_controller" 00:15:35.994 }' 00:15:35.994 [2024-07-12 15:52:05.708814] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:15:35.994 [2024-07-12 15:52:05.708887] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid18616 ] 00:15:36.252 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.252 [2024-07-12 15:52:05.771414] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:36.252 [2024-07-12 15:52:05.888713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.252 [2024-07-12 15:52:05.888770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.252 [2024-07-12 15:52:05.888774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.510 I/O targets: 00:15:36.510 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:36.510 00:15:36.510 00:15:36.510 CUnit - A unit testing framework for C - Version 2.1-3 00:15:36.510 http://cunit.sourceforge.net/ 00:15:36.510 00:15:36.510 00:15:36.510 Suite: bdevio tests on: Nvme1n1 00:15:36.510 Test: blockdev write read block ...passed 00:15:36.510 Test: blockdev write zeroes read block ...passed 00:15:36.510 Test: blockdev write zeroes read no split ...passed 00:15:36.767 Test: blockdev write zeroes read split ...passed 00:15:36.767 Test: blockdev write zeroes read split partial ...passed 00:15:36.767 Test: blockdev reset ...[2024-07-12 15:52:06.312192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:36.767 [2024-07-12 15:52:06.312293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11296d0 (9): Bad file descriptor 00:15:36.767 [2024-07-12 15:52:06.364670] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:36.767 passed 00:15:36.767 Test: blockdev write read 8 blocks ...passed 00:15:36.767 Test: blockdev write read size > 128k ...passed 00:15:36.767 Test: blockdev write read invalid size ...passed 00:15:36.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:36.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:36.767 Test: blockdev write read max offset ...passed 00:15:37.024 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:37.024 Test: blockdev writev readv 8 blocks ...passed 00:15:37.024 Test: blockdev writev readv 30 x 1block ...passed 00:15:37.024 Test: blockdev writev readv block ...passed 00:15:37.024 Test: blockdev writev readv size > 128k ...passed 00:15:37.024 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:37.024 Test: blockdev comparev and writev ...[2024-07-12 15:52:06.661406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.661442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.661473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.661490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.661902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.661927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.661949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.661965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.662356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.662382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.662404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.662420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.662781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.662806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.662828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:37.024 [2024-07-12 15:52:06.662844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:37.024 passed 00:15:37.024 Test: blockdev nvme passthru rw ...passed 00:15:37.024 Test: blockdev nvme passthru vendor specific ...[2024-07-12 15:52:06.744659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.024 [2024-07-12 15:52:06.744687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.744869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.024 [2024-07-12 15:52:06.744892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.745069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.024 [2024-07-12 15:52:06.745091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:37.024 [2024-07-12 15:52:06.745267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:37.024 [2024-07-12 15:52:06.745289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:37.024 passed 00:15:37.281 Test: blockdev nvme admin passthru ...passed 00:15:37.281 Test: blockdev copy ...passed 00:15:37.281 00:15:37.281 Run Summary: Type Total Ran Passed Failed Inactive 00:15:37.281 suites 1 1 n/a 0 0 00:15:37.282 tests 23 23 23 0 0 00:15:37.282 asserts 152 152 152 0 n/a 00:15:37.282 00:15:37.282 Elapsed time = 1.395 seconds 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:37.538 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.539 rmmod nvme_tcp 00:15:37.539 rmmod nvme_fabrics 00:15:37.539 rmmod nvme_keyring 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 18588 ']' 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 18588 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
18588 ']' 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 18588 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 18588 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 18588' 00:15:37.539 killing process with pid 18588 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 18588 00:15:37.539 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 18588 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.796 15:52:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.329 15:52:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.329 00:15:40.329 real 0m6.459s 00:15:40.329 user 0m10.967s 00:15:40.329 sys 0m2.053s 00:15:40.329 15:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.329 15:52:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:40.329 ************************************ 00:15:40.329 END TEST nvmf_bdevio 00:15:40.329 ************************************ 00:15:40.329 15:52:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:40.329 15:52:09 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:40.329 15:52:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.329 15:52:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.329 15:52:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.329 ************************************ 00:15:40.329 START TEST nvmf_auth_target 00:15:40.329 ************************************ 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:40.329 * Looking for test storage... 
00:15:40.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.329 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.330 15:52:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.229 15:52:11 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:42.229 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:42.229 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:15:42.229 Found net devices under 0000:09:00.0: cvl_0_0 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:42.229 Found net devices under 0000:09:00.1: cvl_0_1 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:42.229 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:42.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:15:42.230 00:15:42.230 --- 10.0.0.2 ping statistics --- 00:15:42.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.230 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:15:42.230 00:15:42.230 --- 10.0.0.1 ping statistics --- 00:15:42.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.230 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:42.230 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=20805 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 20805 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 20805 ']' 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
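(Annotation: the nvmf_tcp_init sequence above moves one of the two e810 ports, cvl_0_0, into a private network namespace so the target and the initiator can exchange NVMe/TCP traffic over real NICs on the same host. Condensed into a standalone sketch built only from the commands visible in this run -- the interface names and the 10.0.0.0/24 addressing come from this particular setup, they are not fixed defaults:

    ip netns add cvl_0_0_ns_spdk                                    # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP connections reach the listener
    ping -c 1 10.0.0.2                                              # initiator -> target reachability check

The nvmf_tgt process that follows is then launched with "ip netns exec cvl_0_0_ns_spdk", which is why the listener address 10.0.0.2 belongs to the namespaced port while the host connects from 10.0.0.1.)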
00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.487 15:52:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.744 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.744 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:42.744 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.744 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.744 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.744 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.744 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=20835 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7dc70b643ae52289b8573f4f08b2021cd547670d7699806f 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.IUW 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7dc70b643ae52289b8573f4f08b2021cd547670d7699806f 0 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7dc70b643ae52289b8573f4f08b2021cd547670d7699806f 0 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7dc70b643ae52289b8573f4f08b2021cd547670d7699806f 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.IUW 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.IUW 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.IUW 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff19373f56912900e437c5f941bb6984e9c978d16661635e81dbb1899d97efa0 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.POG 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff19373f56912900e437c5f941bb6984e9c978d16661635e81dbb1899d97efa0 3 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff19373f56912900e437c5f941bb6984e9c978d16661635e81dbb1899d97efa0 3 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff19373f56912900e437c5f941bb6984e9c978d16661635e81dbb1899d97efa0 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.POG 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.POG 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.POG 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e67a9c0079f85d7719b716b3af89cdc2 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7uO 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e67a9c0079f85d7719b716b3af89cdc2 1 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e67a9c0079f85d7719b716b3af89cdc2 1 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=e67a9c0079f85d7719b716b3af89cdc2 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:42.745 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7uO 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7uO 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.7uO 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2fbe5a75c02a0d031c5f9679b00a11407884ea952eb49ad4 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.eHD 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2fbe5a75c02a0d031c5f9679b00a11407884ea952eb49ad4 2 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2fbe5a75c02a0d031c5f9679b00a11407884ea952eb49ad4 2 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2fbe5a75c02a0d031c5f9679b00a11407884ea952eb49ad4 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.eHD 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.eHD 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.eHD 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=28ea3dcf2ca1aceeccb8c4ee169774f62a058e3e7bd36db6 00:15:43.002 
15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.pLf 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 28ea3dcf2ca1aceeccb8c4ee169774f62a058e3e7bd36db6 2 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 28ea3dcf2ca1aceeccb8c4ee169774f62a058e3e7bd36db6 2 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=28ea3dcf2ca1aceeccb8c4ee169774f62a058e3e7bd36db6 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.pLf 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.pLf 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.pLf 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=449d985a08ff51610bc2bd5c317cf991 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ULk 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 449d985a08ff51610bc2bd5c317cf991 1 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 449d985a08ff51610bc2bd5c317cf991 1 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=449d985a08ff51610bc2bd5c317cf991 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ULk 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ULk 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.ULk 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e02dbfb41abba97e9e10d63fd22b811ae4561ee75263234c05d0922b808f1762 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8nx 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e02dbfb41abba97e9e10d63fd22b811ae4561ee75263234c05d0922b808f1762 3 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e02dbfb41abba97e9e10d63fd22b811ae4561ee75263234c05d0922b808f1762 3 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e02dbfb41abba97e9e10d63fd22b811ae4561ee75263234c05d0922b808f1762 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8nx 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8nx 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.8nx 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 20805 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 20805 ']' 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
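(Annotation: the keys[] and ckeys[] values generated above are DHCHAP secrets in the DHHC-1:&lt;digest&gt;:&lt;base64&gt;: form that reappears later in the nvme connect --dhchap-secret / --dhchap-ctrl-secret arguments. The digest byte mirrors the digests map in this run: 00 = null, 01 = sha256, 02 = sha384, 03 = sha512. A minimal standalone sketch of what the xxd + format_dhchap_key/python step appears to do, assuming the usual nvme-cli convention of appending a little-endian CRC-32 of the secret before base64-encoding (this is a hypothetical equivalent, not the exact helper from nvmf/common.sh):

    # draw 24 random bytes; the 48 hex characters are used verbatim as the ASCII secret
    key=$(xxd -p -c0 -l 24 /dev/urandom)
    # append CRC-32 (little-endian) of the secret, base64-encode, wrap as DHHC-1:00:...:
    python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("DHHC-1:00:" + base64.b64encode(k + crc).decode() + ":")' "$key"

Decoding the DHHC-1:00:N2Rj...: secret used further down indeed yields the 48-character hex string plus four trailing bytes, presumably the CRC that lets a mistyped or truncated key be detected when it is parsed.)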
00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.002 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 20835 /var/tmp/host.sock 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 20835 ']' 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:43.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:43.258 15:52:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.IUW 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.IUW 00:15:43.523 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.IUW 00:15:43.781 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.POG ]] 00:15:43.781 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.POG 00:15:43.781 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.781 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.781 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.781 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.POG 00:15:43.781 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.POG 00:15:44.037 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:44.037 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7uO 00:15:44.037 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.037 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.037 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.037 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7uO 00:15:44.037 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7uO 00:15:44.294 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.eHD ]] 00:15:44.294 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eHD 00:15:44.294 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.294 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.294 15:52:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.294 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eHD 00:15:44.294 15:52:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.eHD 00:15:44.549 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:44.549 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.pLf 00:15:44.549 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.549 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.549 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.549 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.pLf 00:15:44.549 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.pLf 00:15:44.805 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.ULk ]] 00:15:44.805 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ULk 00:15:44.805 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.805 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.805 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.805 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ULk 00:15:44.805 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.ULk 00:15:45.061 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:45.061 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8nx 00:15:45.061 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.061 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.061 15:52:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.061 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8nx 00:15:45.061 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8nx 00:15:45.318 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:45.318 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:45.318 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.318 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.318 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.318 15:52:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.577 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.834 00:15:45.834 15:52:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.834 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.834 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.091 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.091 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.091 15:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.091 15:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.091 15:52:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.091 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.091 { 00:15:46.091 "cntlid": 1, 00:15:46.091 "qid": 0, 00:15:46.091 "state": "enabled", 00:15:46.091 "thread": "nvmf_tgt_poll_group_000", 00:15:46.091 "listen_address": { 00:15:46.091 "trtype": "TCP", 00:15:46.091 "adrfam": "IPv4", 00:15:46.091 "traddr": "10.0.0.2", 00:15:46.091 "trsvcid": "4420" 00:15:46.091 }, 00:15:46.091 "peer_address": { 00:15:46.091 "trtype": "TCP", 00:15:46.091 "adrfam": "IPv4", 00:15:46.091 "traddr": "10.0.0.1", 00:15:46.091 "trsvcid": "37120" 00:15:46.091 }, 00:15:46.091 "auth": { 00:15:46.091 "state": "completed", 00:15:46.091 "digest": "sha256", 00:15:46.091 "dhgroup": "null" 00:15:46.091 } 00:15:46.091 } 00:15:46.091 ]' 00:15:46.091 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.348 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.348 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.348 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:46.348 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.348 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.348 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.348 15:52:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.605 15:52:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:15:47.533 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.533 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:47.533 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.533 15:52:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.533 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.533 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.533 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.533 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.790 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.047 00:15:48.047 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.047 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.047 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.305 { 00:15:48.305 "cntlid": 3, 00:15:48.305 "qid": 0, 00:15:48.305 
"state": "enabled", 00:15:48.305 "thread": "nvmf_tgt_poll_group_000", 00:15:48.305 "listen_address": { 00:15:48.305 "trtype": "TCP", 00:15:48.305 "adrfam": "IPv4", 00:15:48.305 "traddr": "10.0.0.2", 00:15:48.305 "trsvcid": "4420" 00:15:48.305 }, 00:15:48.305 "peer_address": { 00:15:48.305 "trtype": "TCP", 00:15:48.305 "adrfam": "IPv4", 00:15:48.305 "traddr": "10.0.0.1", 00:15:48.305 "trsvcid": "47922" 00:15:48.305 }, 00:15:48.305 "auth": { 00:15:48.305 "state": "completed", 00:15:48.305 "digest": "sha256", 00:15:48.305 "dhgroup": "null" 00:15:48.305 } 00:15:48.305 } 00:15:48.305 ]' 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.305 15:52:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.562 15:52:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.493 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:49.749 15:52:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.749 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.750 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.007 00:15:50.007 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.007 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.007 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.264 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.264 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.264 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.264 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.264 15:52:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.264 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.264 { 00:15:50.264 "cntlid": 5, 00:15:50.264 "qid": 0, 00:15:50.264 "state": "enabled", 00:15:50.264 "thread": "nvmf_tgt_poll_group_000", 00:15:50.264 "listen_address": { 00:15:50.264 "trtype": "TCP", 00:15:50.264 "adrfam": "IPv4", 00:15:50.264 "traddr": "10.0.0.2", 00:15:50.264 "trsvcid": "4420" 00:15:50.264 }, 00:15:50.264 "peer_address": { 00:15:50.264 "trtype": "TCP", 00:15:50.264 "adrfam": "IPv4", 00:15:50.264 "traddr": "10.0.0.1", 00:15:50.264 "trsvcid": "47958" 00:15:50.264 }, 00:15:50.264 "auth": { 00:15:50.264 "state": "completed", 00:15:50.264 "digest": "sha256", 00:15:50.264 "dhgroup": "null" 00:15:50.264 } 00:15:50.264 } 00:15:50.264 ]' 00:15:50.264 15:52:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.522 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.522 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.522 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:50.522 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:15:50.522 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.522 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.522 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.779 15:52:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:15:51.711 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.711 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:51.712 15:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.712 15:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.712 15:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.712 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.712 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.712 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.969 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.226 00:15:52.226 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.226 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.226 15:52:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.484 { 00:15:52.484 "cntlid": 7, 00:15:52.484 "qid": 0, 00:15:52.484 "state": "enabled", 00:15:52.484 "thread": "nvmf_tgt_poll_group_000", 00:15:52.484 "listen_address": { 00:15:52.484 "trtype": "TCP", 00:15:52.484 "adrfam": "IPv4", 00:15:52.484 "traddr": "10.0.0.2", 00:15:52.484 "trsvcid": "4420" 00:15:52.484 }, 00:15:52.484 "peer_address": { 00:15:52.484 "trtype": "TCP", 00:15:52.484 "adrfam": "IPv4", 00:15:52.484 "traddr": "10.0.0.1", 00:15:52.484 "trsvcid": "47990" 00:15:52.484 }, 00:15:52.484 "auth": { 00:15:52.484 "state": "completed", 00:15:52.484 "digest": "sha256", 00:15:52.484 "dhgroup": "null" 00:15:52.484 } 00:15:52.484 } 00:15:52.484 ]' 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.484 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.741 15:52:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.674 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.931 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.189 00:15:54.189 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.189 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.189 15:52:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.447 { 00:15:54.447 "cntlid": 9, 00:15:54.447 "qid": 0, 00:15:54.447 "state": "enabled", 00:15:54.447 "thread": "nvmf_tgt_poll_group_000", 00:15:54.447 "listen_address": { 00:15:54.447 "trtype": "TCP", 00:15:54.447 "adrfam": "IPv4", 00:15:54.447 "traddr": "10.0.0.2", 00:15:54.447 "trsvcid": "4420" 00:15:54.447 }, 00:15:54.447 "peer_address": { 00:15:54.447 "trtype": "TCP", 00:15:54.447 "adrfam": "IPv4", 00:15:54.447 "traddr": "10.0.0.1", 00:15:54.447 "trsvcid": "48012" 00:15:54.447 }, 00:15:54.447 "auth": { 00:15:54.447 "state": "completed", 00:15:54.447 "digest": "sha256", 00:15:54.447 "dhgroup": "ffdhe2048" 00:15:54.447 } 00:15:54.447 } 00:15:54.447 ]' 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.704 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.704 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.704 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.704 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.704 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.961 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.891 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.456 00:15:56.456 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.456 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.456 15:52:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.456 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.456 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.456 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.456 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.456 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.456 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.456 { 00:15:56.456 "cntlid": 11, 00:15:56.456 "qid": 0, 00:15:56.457 "state": "enabled", 00:15:56.457 "thread": "nvmf_tgt_poll_group_000", 00:15:56.457 "listen_address": { 00:15:56.457 "trtype": "TCP", 00:15:56.457 "adrfam": "IPv4", 00:15:56.457 "traddr": "10.0.0.2", 00:15:56.457 "trsvcid": "4420" 00:15:56.457 }, 00:15:56.457 "peer_address": { 00:15:56.457 "trtype": "TCP", 00:15:56.457 "adrfam": "IPv4", 00:15:56.457 "traddr": "10.0.0.1", 00:15:56.457 "trsvcid": "48028" 00:15:56.457 }, 00:15:56.457 "auth": { 00:15:56.457 "state": "completed", 00:15:56.457 "digest": "sha256", 00:15:56.457 "dhgroup": "ffdhe2048" 00:15:56.457 } 00:15:56.457 } 00:15:56.457 ]' 00:15:56.457 
15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.713 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.713 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.713 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:56.713 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.713 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.713 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.713 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.969 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:57.899 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.156 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.413 00:15:58.413 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.413 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.413 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.700 { 00:15:58.700 "cntlid": 13, 00:15:58.700 "qid": 0, 00:15:58.700 "state": "enabled", 00:15:58.700 "thread": "nvmf_tgt_poll_group_000", 00:15:58.700 "listen_address": { 00:15:58.700 "trtype": "TCP", 00:15:58.700 "adrfam": "IPv4", 00:15:58.700 "traddr": "10.0.0.2", 00:15:58.700 "trsvcid": "4420" 00:15:58.700 }, 00:15:58.700 "peer_address": { 00:15:58.700 "trtype": "TCP", 00:15:58.700 "adrfam": "IPv4", 00:15:58.700 "traddr": "10.0.0.1", 00:15:58.700 "trsvcid": "53606" 00:15:58.700 }, 00:15:58.700 "auth": { 00:15:58.700 "state": "completed", 00:15:58.700 "digest": "sha256", 00:15:58.700 "dhgroup": "ffdhe2048" 00:15:58.700 } 00:15:58.700 } 00:15:58.700 ]' 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.700 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.957 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.957 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.957 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.957 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.957 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.215 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:00.147 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.404 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.662 00:16:00.662 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.662 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:00.662 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.919 { 00:16:00.919 "cntlid": 15, 00:16:00.919 "qid": 0, 00:16:00.919 "state": "enabled", 00:16:00.919 "thread": "nvmf_tgt_poll_group_000", 00:16:00.919 "listen_address": { 00:16:00.919 "trtype": "TCP", 00:16:00.919 "adrfam": "IPv4", 00:16:00.919 "traddr": "10.0.0.2", 00:16:00.919 "trsvcid": "4420" 00:16:00.919 }, 00:16:00.919 "peer_address": { 00:16:00.919 "trtype": "TCP", 00:16:00.919 "adrfam": "IPv4", 00:16:00.919 "traddr": "10.0.0.1", 00:16:00.919 "trsvcid": "53636" 00:16:00.919 }, 00:16:00.919 "auth": { 00:16:00.919 "state": "completed", 00:16:00.919 "digest": "sha256", 00:16:00.919 "dhgroup": "ffdhe2048" 00:16:00.919 } 00:16:00.919 } 00:16:00.919 ]' 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.919 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.177 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.107 15:52:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.107 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.364 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.620 00:16:02.620 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.620 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.620 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.877 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.877 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.877 15:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.877 15:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.877 15:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.877 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.877 { 00:16:02.877 "cntlid": 17, 00:16:02.878 "qid": 0, 00:16:02.878 "state": "enabled", 00:16:02.878 "thread": "nvmf_tgt_poll_group_000", 00:16:02.878 "listen_address": { 00:16:02.878 "trtype": "TCP", 00:16:02.878 "adrfam": "IPv4", 
00:16:02.878 "traddr": "10.0.0.2", 00:16:02.878 "trsvcid": "4420" 00:16:02.878 }, 00:16:02.878 "peer_address": { 00:16:02.878 "trtype": "TCP", 00:16:02.878 "adrfam": "IPv4", 00:16:02.878 "traddr": "10.0.0.1", 00:16:02.878 "trsvcid": "53674" 00:16:02.878 }, 00:16:02.878 "auth": { 00:16:02.878 "state": "completed", 00:16:02.878 "digest": "sha256", 00:16:02.878 "dhgroup": "ffdhe3072" 00:16:02.878 } 00:16:02.878 } 00:16:02.878 ]' 00:16:02.878 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.135 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.135 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.135 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.135 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.135 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.135 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.135 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.392 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.323 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:04.581 15:52:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.581 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.839 00:16:04.839 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.839 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.839 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.097 { 00:16:05.097 "cntlid": 19, 00:16:05.097 "qid": 0, 00:16:05.097 "state": "enabled", 00:16:05.097 "thread": "nvmf_tgt_poll_group_000", 00:16:05.097 "listen_address": { 00:16:05.097 "trtype": "TCP", 00:16:05.097 "adrfam": "IPv4", 00:16:05.097 "traddr": "10.0.0.2", 00:16:05.097 "trsvcid": "4420" 00:16:05.097 }, 00:16:05.097 "peer_address": { 00:16:05.097 "trtype": "TCP", 00:16:05.097 "adrfam": "IPv4", 00:16:05.097 "traddr": "10.0.0.1", 00:16:05.097 "trsvcid": "53690" 00:16:05.097 }, 00:16:05.097 "auth": { 00:16:05.097 "state": "completed", 00:16:05.097 "digest": "sha256", 00:16:05.097 "dhgroup": "ffdhe3072" 00:16:05.097 } 00:16:05.097 } 00:16:05.097 ]' 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.097 15:52:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.097 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.355 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:06.285 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.544 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.109 00:16:07.109 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.109 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.109 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.366 { 00:16:07.366 "cntlid": 21, 00:16:07.366 "qid": 0, 00:16:07.366 "state": "enabled", 00:16:07.366 "thread": "nvmf_tgt_poll_group_000", 00:16:07.366 "listen_address": { 00:16:07.366 "trtype": "TCP", 00:16:07.366 "adrfam": "IPv4", 00:16:07.366 "traddr": "10.0.0.2", 00:16:07.366 "trsvcid": "4420" 00:16:07.366 }, 00:16:07.366 "peer_address": { 00:16:07.366 "trtype": "TCP", 00:16:07.366 "adrfam": "IPv4", 00:16:07.366 "traddr": "10.0.0.1", 00:16:07.366 "trsvcid": "60338" 00:16:07.366 }, 00:16:07.366 "auth": { 00:16:07.366 "state": "completed", 00:16:07.366 "digest": "sha256", 00:16:07.366 "dhgroup": "ffdhe3072" 00:16:07.366 } 00:16:07.366 } 00:16:07.366 ]' 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.366 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.366 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.366 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.366 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.366 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.623 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:08.554 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:08.811 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:08.811 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.811 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:08.811 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.811 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.812 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.812 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:08.812 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.812 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.812 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.812 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.812 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.376 00:16:09.376 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.376 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.376 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.633 { 00:16:09.633 "cntlid": 23, 00:16:09.633 "qid": 0, 00:16:09.633 "state": "enabled", 00:16:09.633 "thread": "nvmf_tgt_poll_group_000", 00:16:09.633 "listen_address": { 00:16:09.633 "trtype": "TCP", 00:16:09.633 "adrfam": "IPv4", 00:16:09.633 "traddr": "10.0.0.2", 00:16:09.633 "trsvcid": "4420" 00:16:09.633 }, 00:16:09.633 "peer_address": { 00:16:09.633 "trtype": "TCP", 00:16:09.633 "adrfam": "IPv4", 00:16:09.633 "traddr": "10.0.0.1", 00:16:09.633 "trsvcid": "60362" 00:16:09.633 }, 00:16:09.633 "auth": { 00:16:09.633 "state": "completed", 00:16:09.633 "digest": "sha256", 00:16:09.633 "dhgroup": "ffdhe3072" 00:16:09.633 } 00:16:09.633 } 00:16:09.633 ]' 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.633 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.890 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.821 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.079 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.644 00:16:11.644 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.644 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.644 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.901 { 00:16:11.901 "cntlid": 25, 00:16:11.901 "qid": 0, 00:16:11.901 "state": "enabled", 00:16:11.901 "thread": "nvmf_tgt_poll_group_000", 00:16:11.901 "listen_address": { 00:16:11.901 "trtype": "TCP", 00:16:11.901 "adrfam": "IPv4", 00:16:11.901 "traddr": "10.0.0.2", 00:16:11.901 "trsvcid": "4420" 00:16:11.901 }, 00:16:11.901 "peer_address": { 00:16:11.901 "trtype": "TCP", 00:16:11.901 "adrfam": "IPv4", 00:16:11.901 "traddr": "10.0.0.1", 00:16:11.901 "trsvcid": "60398" 00:16:11.901 }, 00:16:11.901 "auth": { 00:16:11.901 "state": "completed", 00:16:11.901 "digest": "sha256", 00:16:11.901 "dhgroup": "ffdhe4096" 00:16:11.901 } 00:16:11.901 } 00:16:11.901 ]' 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.901 15:52:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.901 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.158 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.122 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.380 15:52:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.380 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.943 00:16:13.943 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.943 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.943 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.200 { 00:16:14.200 "cntlid": 27, 00:16:14.200 "qid": 0, 00:16:14.200 "state": "enabled", 00:16:14.200 "thread": "nvmf_tgt_poll_group_000", 00:16:14.200 "listen_address": { 00:16:14.200 "trtype": "TCP", 00:16:14.200 "adrfam": "IPv4", 00:16:14.200 "traddr": "10.0.0.2", 00:16:14.200 "trsvcid": "4420" 00:16:14.200 }, 00:16:14.200 "peer_address": { 00:16:14.200 "trtype": "TCP", 00:16:14.200 "adrfam": "IPv4", 00:16:14.200 "traddr": "10.0.0.1", 00:16:14.200 "trsvcid": "60426" 00:16:14.200 }, 00:16:14.200 "auth": { 00:16:14.200 "state": "completed", 00:16:14.200 "digest": "sha256", 00:16:14.200 "dhgroup": "ffdhe4096" 00:16:14.200 } 00:16:14.200 } 00:16:14.200 ]' 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.200 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.457 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:16:15.387 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.387 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:15.387 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.387 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.387 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.387 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.387 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.387 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:15.644 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:15.644 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.644 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.644 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.645 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.209 00:16:16.209 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.209 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.209 15:52:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.467 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.467 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.467 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.467 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.467 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.467 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.467 { 00:16:16.467 "cntlid": 29, 00:16:16.467 "qid": 0, 00:16:16.467 "state": "enabled", 00:16:16.467 "thread": "nvmf_tgt_poll_group_000", 00:16:16.467 "listen_address": { 00:16:16.467 "trtype": "TCP", 00:16:16.467 "adrfam": "IPv4", 00:16:16.467 "traddr": "10.0.0.2", 00:16:16.467 "trsvcid": "4420" 00:16:16.467 }, 00:16:16.467 "peer_address": { 00:16:16.467 "trtype": "TCP", 00:16:16.467 "adrfam": "IPv4", 00:16:16.467 "traddr": "10.0.0.1", 00:16:16.467 "trsvcid": "60446" 00:16:16.467 }, 00:16:16.467 "auth": { 00:16:16.467 "state": "completed", 00:16:16.467 "digest": "sha256", 00:16:16.467 "dhgroup": "ffdhe4096" 00:16:16.467 } 00:16:16.467 } 00:16:16.467 ]' 00:16:16.467 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.467 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.467 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.467 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.467 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.467 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.467 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.467 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.724 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:16:17.655 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.655 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.655 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.655 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.655 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.655 15:52:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.655 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.655 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.912 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.476 00:16:18.477 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.477 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.477 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.734 { 00:16:18.734 "cntlid": 31, 00:16:18.734 "qid": 0, 00:16:18.734 "state": "enabled", 00:16:18.734 "thread": "nvmf_tgt_poll_group_000", 00:16:18.734 "listen_address": { 00:16:18.734 "trtype": "TCP", 00:16:18.734 "adrfam": "IPv4", 00:16:18.734 "traddr": "10.0.0.2", 00:16:18.734 "trsvcid": "4420" 00:16:18.734 }, 
00:16:18.734 "peer_address": { 00:16:18.734 "trtype": "TCP", 00:16:18.734 "adrfam": "IPv4", 00:16:18.734 "traddr": "10.0.0.1", 00:16:18.734 "trsvcid": "48100" 00:16:18.734 }, 00:16:18.734 "auth": { 00:16:18.734 "state": "completed", 00:16:18.734 "digest": "sha256", 00:16:18.734 "dhgroup": "ffdhe4096" 00:16:18.734 } 00:16:18.734 } 00:16:18.734 ]' 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.734 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.991 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.919 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.177 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.764 00:16:20.764 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.764 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.764 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.022 { 00:16:21.022 "cntlid": 33, 00:16:21.022 "qid": 0, 00:16:21.022 "state": "enabled", 00:16:21.022 "thread": "nvmf_tgt_poll_group_000", 00:16:21.022 "listen_address": { 00:16:21.022 "trtype": "TCP", 00:16:21.022 "adrfam": "IPv4", 00:16:21.022 "traddr": "10.0.0.2", 00:16:21.022 "trsvcid": "4420" 00:16:21.022 }, 00:16:21.022 "peer_address": { 00:16:21.022 "trtype": "TCP", 00:16:21.022 "adrfam": "IPv4", 00:16:21.022 "traddr": "10.0.0.1", 00:16:21.022 "trsvcid": "48130" 00:16:21.022 }, 00:16:21.022 "auth": { 00:16:21.022 "state": "completed", 00:16:21.022 "digest": "sha256", 00:16:21.022 "dhgroup": "ffdhe6144" 00:16:21.022 } 00:16:21.022 } 00:16:21.022 ]' 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.022 15:52:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.022 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.279 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:22.207 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.464 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.025 00:16:23.025 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.025 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.025 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.281 { 00:16:23.281 "cntlid": 35, 00:16:23.281 "qid": 0, 00:16:23.281 "state": "enabled", 00:16:23.281 "thread": "nvmf_tgt_poll_group_000", 00:16:23.281 "listen_address": { 00:16:23.281 "trtype": "TCP", 00:16:23.281 "adrfam": "IPv4", 00:16:23.281 "traddr": "10.0.0.2", 00:16:23.281 "trsvcid": "4420" 00:16:23.281 }, 00:16:23.281 "peer_address": { 00:16:23.281 "trtype": "TCP", 00:16:23.281 "adrfam": "IPv4", 00:16:23.281 "traddr": "10.0.0.1", 00:16:23.281 "trsvcid": "48162" 00:16:23.281 }, 00:16:23.281 "auth": { 00:16:23.281 "state": "completed", 00:16:23.281 "digest": "sha256", 00:16:23.281 "dhgroup": "ffdhe6144" 00:16:23.281 } 00:16:23.281 } 00:16:23.281 ]' 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.281 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.281 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.281 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.537 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.537 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.537 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.792 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:24.725 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.004 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.568 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.568 { 00:16:25.568 "cntlid": 37, 00:16:25.568 "qid": 0, 00:16:25.568 "state": "enabled", 00:16:25.568 "thread": "nvmf_tgt_poll_group_000", 00:16:25.568 "listen_address": { 00:16:25.568 "trtype": "TCP", 00:16:25.568 "adrfam": "IPv4", 00:16:25.568 "traddr": "10.0.0.2", 00:16:25.568 "trsvcid": "4420" 00:16:25.568 }, 00:16:25.568 "peer_address": { 00:16:25.568 "trtype": "TCP", 00:16:25.568 "adrfam": "IPv4", 00:16:25.568 "traddr": "10.0.0.1", 00:16:25.568 "trsvcid": "48198" 00:16:25.568 }, 00:16:25.568 "auth": { 00:16:25.568 "state": "completed", 00:16:25.568 "digest": "sha256", 00:16:25.568 "dhgroup": "ffdhe6144" 00:16:25.568 } 00:16:25.568 } 00:16:25.568 ]' 00:16:25.568 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.825 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.825 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.825 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.825 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.825 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.825 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.825 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.084 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.050 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.307 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.871 00:16:27.871 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.871 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.871 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.129 { 00:16:28.129 "cntlid": 39, 00:16:28.129 "qid": 0, 00:16:28.129 "state": "enabled", 00:16:28.129 "thread": "nvmf_tgt_poll_group_000", 00:16:28.129 "listen_address": { 00:16:28.129 "trtype": "TCP", 00:16:28.129 "adrfam": "IPv4", 00:16:28.129 "traddr": "10.0.0.2", 00:16:28.129 "trsvcid": "4420" 00:16:28.129 }, 00:16:28.129 "peer_address": { 00:16:28.129 "trtype": "TCP", 00:16:28.129 "adrfam": "IPv4", 00:16:28.129 "traddr": "10.0.0.1", 00:16:28.129 "trsvcid": "39992" 00:16:28.129 }, 00:16:28.129 "auth": { 00:16:28.129 "state": "completed", 00:16:28.129 "digest": "sha256", 00:16:28.129 "dhgroup": "ffdhe6144" 00:16:28.129 } 00:16:28.129 } 00:16:28.129 ]' 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.129 15:52:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.129 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.386 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.317 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.318 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.574 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.574 15:52:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.575 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.507 00:16:30.507 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.507 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.507 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.765 { 00:16:30.765 "cntlid": 41, 00:16:30.765 "qid": 0, 00:16:30.765 "state": "enabled", 00:16:30.765 "thread": "nvmf_tgt_poll_group_000", 00:16:30.765 "listen_address": { 00:16:30.765 "trtype": "TCP", 00:16:30.765 "adrfam": "IPv4", 00:16:30.765 "traddr": "10.0.0.2", 00:16:30.765 "trsvcid": "4420" 00:16:30.765 }, 00:16:30.765 "peer_address": { 00:16:30.765 "trtype": "TCP", 00:16:30.765 "adrfam": "IPv4", 00:16:30.765 "traddr": "10.0.0.1", 00:16:30.765 "trsvcid": "40022" 00:16:30.765 }, 00:16:30.765 "auth": { 00:16:30.765 "state": "completed", 00:16:30.765 "digest": "sha256", 00:16:30.765 "dhgroup": "ffdhe8192" 00:16:30.765 } 00:16:30.765 } 00:16:30.765 ]' 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.765 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.031 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:31.966 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.223 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:33.155 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.155 { 00:16:33.155 "cntlid": 43, 00:16:33.155 "qid": 0, 00:16:33.155 "state": "enabled", 00:16:33.155 "thread": "nvmf_tgt_poll_group_000", 00:16:33.155 "listen_address": { 00:16:33.155 "trtype": "TCP", 00:16:33.155 "adrfam": "IPv4", 00:16:33.155 "traddr": "10.0.0.2", 00:16:33.155 "trsvcid": "4420" 00:16:33.155 }, 00:16:33.155 "peer_address": { 00:16:33.155 "trtype": "TCP", 00:16:33.155 "adrfam": "IPv4", 00:16:33.155 "traddr": "10.0.0.1", 00:16:33.155 "trsvcid": "40050" 00:16:33.155 }, 00:16:33.155 "auth": { 00:16:33.155 "state": "completed", 00:16:33.155 "digest": "sha256", 00:16:33.155 "dhgroup": "ffdhe8192" 00:16:33.155 } 00:16:33.155 } 00:16:33.155 ]' 00:16:33.155 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.412 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.412 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.412 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.412 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.412 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.412 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.412 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.669 15:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.599 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.856 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.856 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.856 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:35.420 00:16:35.420 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.420 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.420 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.678 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.678 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.678 15:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.678 15:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.678 15:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.678 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.678 { 00:16:35.678 "cntlid": 45, 00:16:35.678 "qid": 0, 00:16:35.678 "state": "enabled", 00:16:35.678 "thread": "nvmf_tgt_poll_group_000", 00:16:35.678 "listen_address": { 00:16:35.678 "trtype": "TCP", 00:16:35.678 "adrfam": "IPv4", 00:16:35.678 "traddr": "10.0.0.2", 00:16:35.678 "trsvcid": "4420" 
00:16:35.678 }, 00:16:35.678 "peer_address": { 00:16:35.678 "trtype": "TCP", 00:16:35.678 "adrfam": "IPv4", 00:16:35.678 "traddr": "10.0.0.1", 00:16:35.678 "trsvcid": "40080" 00:16:35.678 }, 00:16:35.678 "auth": { 00:16:35.678 "state": "completed", 00:16:35.678 "digest": "sha256", 00:16:35.678 "dhgroup": "ffdhe8192" 00:16:35.678 } 00:16:35.678 } 00:16:35.678 ]' 00:16:35.678 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.935 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.935 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.935 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.935 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.935 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.935 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.935 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.192 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.122 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.380 15:53:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.380 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.327 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.327 { 00:16:38.327 "cntlid": 47, 00:16:38.327 "qid": 0, 00:16:38.327 "state": "enabled", 00:16:38.327 "thread": "nvmf_tgt_poll_group_000", 00:16:38.327 "listen_address": { 00:16:38.327 "trtype": "TCP", 00:16:38.327 "adrfam": "IPv4", 00:16:38.327 "traddr": "10.0.0.2", 00:16:38.327 "trsvcid": "4420" 00:16:38.327 }, 00:16:38.327 "peer_address": { 00:16:38.327 "trtype": "TCP", 00:16:38.327 "adrfam": "IPv4", 00:16:38.327 "traddr": "10.0.0.1", 00:16:38.327 "trsvcid": "46420" 00:16:38.327 }, 00:16:38.327 "auth": { 00:16:38.327 "state": "completed", 00:16:38.327 "digest": "sha256", 00:16:38.327 "dhgroup": "ffdhe8192" 00:16:38.327 } 00:16:38.327 } 00:16:38.327 ]' 00:16:38.327 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.327 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.327 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.584 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.584 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.584 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.584 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.584 
15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.842 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.774 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.032 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.290 00:16:40.290 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.290 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.290 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.583 { 00:16:40.583 "cntlid": 49, 00:16:40.583 "qid": 0, 00:16:40.583 "state": "enabled", 00:16:40.583 "thread": "nvmf_tgt_poll_group_000", 00:16:40.583 "listen_address": { 00:16:40.583 "trtype": "TCP", 00:16:40.583 "adrfam": "IPv4", 00:16:40.583 "traddr": "10.0.0.2", 00:16:40.583 "trsvcid": "4420" 00:16:40.583 }, 00:16:40.583 "peer_address": { 00:16:40.583 "trtype": "TCP", 00:16:40.583 "adrfam": "IPv4", 00:16:40.583 "traddr": "10.0.0.1", 00:16:40.583 "trsvcid": "46456" 00:16:40.583 }, 00:16:40.583 "auth": { 00:16:40.583 "state": "completed", 00:16:40.583 "digest": "sha384", 00:16:40.583 "dhgroup": "null" 00:16:40.583 } 00:16:40.583 } 00:16:40.583 ]' 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.583 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.841 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:16:41.773 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.031 15:53:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.031 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.031 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.031 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.031 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.031 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.031 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.288 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.545 00:16:42.545 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.545 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.545 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.802 { 00:16:42.802 "cntlid": 51, 00:16:42.802 "qid": 0, 00:16:42.802 "state": "enabled", 00:16:42.802 "thread": "nvmf_tgt_poll_group_000", 00:16:42.802 "listen_address": { 00:16:42.802 "trtype": "TCP", 00:16:42.802 "adrfam": "IPv4", 00:16:42.802 "traddr": "10.0.0.2", 00:16:42.802 "trsvcid": "4420" 00:16:42.802 }, 00:16:42.802 "peer_address": { 00:16:42.802 "trtype": "TCP", 00:16:42.802 "adrfam": "IPv4", 00:16:42.802 "traddr": "10.0.0.1", 00:16:42.802 "trsvcid": "46484" 00:16:42.802 }, 00:16:42.802 "auth": { 00:16:42.802 "state": "completed", 00:16:42.802 "digest": "sha384", 00:16:42.802 "dhgroup": "null" 00:16:42.802 } 00:16:42.802 } 00:16:42.802 ]' 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.802 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.060 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:43.989 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:44.247 15:53:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.247 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.812 00:16:44.812 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.812 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.812 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.812 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.812 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.812 15:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.812 15:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.069 { 00:16:45.069 "cntlid": 53, 00:16:45.069 "qid": 0, 00:16:45.069 "state": "enabled", 00:16:45.069 "thread": "nvmf_tgt_poll_group_000", 00:16:45.069 "listen_address": { 00:16:45.069 "trtype": "TCP", 00:16:45.069 "adrfam": "IPv4", 00:16:45.069 "traddr": "10.0.0.2", 00:16:45.069 "trsvcid": "4420" 00:16:45.069 }, 00:16:45.069 "peer_address": { 00:16:45.069 "trtype": "TCP", 00:16:45.069 "adrfam": "IPv4", 00:16:45.069 "traddr": "10.0.0.1", 00:16:45.069 "trsvcid": "46506" 00:16:45.069 }, 00:16:45.069 "auth": { 00:16:45.069 "state": "completed", 00:16:45.069 "digest": "sha384", 00:16:45.069 "dhgroup": "null" 00:16:45.069 } 00:16:45.069 } 00:16:45.069 ]' 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.069 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.326 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:16:46.256 15:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.257 15:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:46.257 15:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.257 15:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.257 15:53:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.257 15:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.257 15:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.257 15:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.513 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.769 00:16:46.769 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.769 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.769 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.026 { 00:16:47.026 "cntlid": 55, 00:16:47.026 "qid": 0, 00:16:47.026 "state": "enabled", 00:16:47.026 "thread": "nvmf_tgt_poll_group_000", 00:16:47.026 "listen_address": { 00:16:47.026 "trtype": "TCP", 00:16:47.026 "adrfam": "IPv4", 00:16:47.026 "traddr": "10.0.0.2", 00:16:47.026 "trsvcid": "4420" 00:16:47.026 }, 00:16:47.026 "peer_address": { 00:16:47.026 "trtype": "TCP", 00:16:47.026 "adrfam": "IPv4", 00:16:47.026 "traddr": "10.0.0.1", 00:16:47.026 "trsvcid": "46544" 00:16:47.026 }, 00:16:47.026 "auth": { 00:16:47.026 "state": "completed", 00:16:47.026 "digest": "sha384", 00:16:47.026 "dhgroup": "null" 00:16:47.026 } 00:16:47.026 } 00:16:47.026 ]' 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.026 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.283 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:16:48.214 15:53:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.214 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.472 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.037 00:16:49.037 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.037 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.037 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.294 { 00:16:49.294 "cntlid": 57, 00:16:49.294 "qid": 0, 00:16:49.294 "state": "enabled", 00:16:49.294 "thread": "nvmf_tgt_poll_group_000", 00:16:49.294 "listen_address": { 00:16:49.294 "trtype": "TCP", 00:16:49.294 "adrfam": "IPv4", 00:16:49.294 "traddr": "10.0.0.2", 00:16:49.294 "trsvcid": "4420" 00:16:49.294 }, 00:16:49.294 "peer_address": { 00:16:49.294 "trtype": "TCP", 00:16:49.294 "adrfam": "IPv4", 00:16:49.294 "traddr": "10.0.0.1", 00:16:49.294 "trsvcid": "39002" 00:16:49.294 }, 00:16:49.294 "auth": { 00:16:49.294 "state": "completed", 00:16:49.294 "digest": "sha384", 00:16:49.294 "dhgroup": "ffdhe2048" 00:16:49.294 } 00:16:49.294 } 00:16:49.294 ]' 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.294 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.551 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.483 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.741 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.000 00:16:51.000 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.000 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.000 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.257 { 00:16:51.257 "cntlid": 59, 00:16:51.257 "qid": 0, 00:16:51.257 "state": "enabled", 00:16:51.257 "thread": "nvmf_tgt_poll_group_000", 00:16:51.257 "listen_address": { 00:16:51.257 "trtype": "TCP", 00:16:51.257 "adrfam": "IPv4", 00:16:51.257 "traddr": "10.0.0.2", 00:16:51.257 "trsvcid": "4420" 00:16:51.257 }, 00:16:51.257 "peer_address": { 00:16:51.257 "trtype": "TCP", 00:16:51.257 "adrfam": "IPv4", 00:16:51.257 
"traddr": "10.0.0.1", 00:16:51.257 "trsvcid": "39020" 00:16:51.257 }, 00:16:51.257 "auth": { 00:16:51.257 "state": "completed", 00:16:51.257 "digest": "sha384", 00:16:51.257 "dhgroup": "ffdhe2048" 00:16:51.257 } 00:16:51.257 } 00:16:51.257 ]' 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.257 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.514 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.514 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.514 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.770 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:16:52.699 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.700 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.956 00:16:53.214 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.214 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.214 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.470 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.470 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.470 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.470 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.470 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.470 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.470 { 00:16:53.470 "cntlid": 61, 00:16:53.470 "qid": 0, 00:16:53.470 "state": "enabled", 00:16:53.470 "thread": "nvmf_tgt_poll_group_000", 00:16:53.470 "listen_address": { 00:16:53.470 "trtype": "TCP", 00:16:53.470 "adrfam": "IPv4", 00:16:53.470 "traddr": "10.0.0.2", 00:16:53.470 "trsvcid": "4420" 00:16:53.470 }, 00:16:53.470 "peer_address": { 00:16:53.470 "trtype": "TCP", 00:16:53.470 "adrfam": "IPv4", 00:16:53.470 "traddr": "10.0.0.1", 00:16:53.470 "trsvcid": "39064" 00:16:53.470 }, 00:16:53.470 "auth": { 00:16:53.470 "state": "completed", 00:16:53.470 "digest": "sha384", 00:16:53.470 "dhgroup": "ffdhe2048" 00:16:53.470 } 00:16:53.470 } 00:16:53.470 ]' 00:16:53.470 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.470 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.470 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.470 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.470 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.470 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.470 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.470 15:53:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.727 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.687 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.945 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.202 00:16:55.202 15:53:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.202 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.202 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.460 { 00:16:55.460 "cntlid": 63, 00:16:55.460 "qid": 0, 00:16:55.460 "state": "enabled", 00:16:55.460 "thread": "nvmf_tgt_poll_group_000", 00:16:55.460 "listen_address": { 00:16:55.460 "trtype": "TCP", 00:16:55.460 "adrfam": "IPv4", 00:16:55.460 "traddr": "10.0.0.2", 00:16:55.460 "trsvcid": "4420" 00:16:55.460 }, 00:16:55.460 "peer_address": { 00:16:55.460 "trtype": "TCP", 00:16:55.460 "adrfam": "IPv4", 00:16:55.460 "traddr": "10.0.0.1", 00:16:55.460 "trsvcid": "39082" 00:16:55.460 }, 00:16:55.460 "auth": { 00:16:55.460 "state": "completed", 00:16:55.460 "digest": "sha384", 00:16:55.460 "dhgroup": "ffdhe2048" 00:16:55.460 } 00:16:55.460 } 00:16:55.460 ]' 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.460 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.717 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.717 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.717 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.717 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.717 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.974 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
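[editor's note] The trace above has just finished the sha384/ffdhe2048 pass and repeats the same per-key cycle below for ffdhe3072, ffdhe4096 and ffdhe6144. As a reading aid only, here is a minimal condensed sketch of that cycle, built solely from RPCs that appear verbatim in this trace; the loop, the variable names (rpc, subnqn, hostnqn), and the omission of the kernel `nvme connect`/`nvme disconnect` step with the DHHC-1 secrets are assumptions for illustration, not the test script itself.

    #!/usr/bin/env bash
    # Sketch (assumed loop/variable names): one DH-HMAC-CHAP cycle per key,
    # using the rpc.py path, sockets, address and NQNs shown in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    for key in key0 key1 key2 key3; do
        # Restrict the host-side initiator to one digest/dhgroup combination.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
        # Allow the host on the target subsystem with the key under test
        # (the trace also passes --dhchap-ctrlr-key when a ckey is defined).
        "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
        # Attach a controller through the host RPC server so the handshake runs.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
        # Verify the qpair completed authentication with the expected digest/dhgroup.
        "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'
        # Tear down before the next key.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
        "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done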
00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:56.905 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.163 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.420 00:16:57.420 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.420 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.420 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.677 { 
00:16:57.677 "cntlid": 65, 00:16:57.677 "qid": 0, 00:16:57.677 "state": "enabled", 00:16:57.677 "thread": "nvmf_tgt_poll_group_000", 00:16:57.677 "listen_address": { 00:16:57.677 "trtype": "TCP", 00:16:57.677 "adrfam": "IPv4", 00:16:57.677 "traddr": "10.0.0.2", 00:16:57.677 "trsvcid": "4420" 00:16:57.677 }, 00:16:57.677 "peer_address": { 00:16:57.677 "trtype": "TCP", 00:16:57.677 "adrfam": "IPv4", 00:16:57.677 "traddr": "10.0.0.1", 00:16:57.677 "trsvcid": "57534" 00:16:57.677 }, 00:16:57.677 "auth": { 00:16:57.677 "state": "completed", 00:16:57.677 "digest": "sha384", 00:16:57.677 "dhgroup": "ffdhe3072" 00:16:57.677 } 00:16:57.677 } 00:16:57.677 ]' 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.677 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.934 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:57.934 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.934 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.934 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.934 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.192 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:16:59.123 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.124 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:59.124 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.124 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.124 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.124 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.124 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.124 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.381 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.639 00:16:59.639 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.639 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.639 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.896 { 00:16:59.896 "cntlid": 67, 00:16:59.896 "qid": 0, 00:16:59.896 "state": "enabled", 00:16:59.896 "thread": "nvmf_tgt_poll_group_000", 00:16:59.896 "listen_address": { 00:16:59.896 "trtype": "TCP", 00:16:59.896 "adrfam": "IPv4", 00:16:59.896 "traddr": "10.0.0.2", 00:16:59.896 "trsvcid": "4420" 00:16:59.896 }, 00:16:59.896 "peer_address": { 00:16:59.896 "trtype": "TCP", 00:16:59.896 "adrfam": "IPv4", 00:16:59.896 "traddr": "10.0.0.1", 00:16:59.896 "trsvcid": "57550" 00:16:59.896 }, 00:16:59.896 "auth": { 00:16:59.896 "state": "completed", 00:16:59.896 "digest": "sha384", 00:16:59.896 "dhgroup": "ffdhe3072" 00:16:59.896 } 00:16:59.896 } 00:16:59.896 ]' 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.896 15:53:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.896 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.153 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.153 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.153 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.410 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.343 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.343 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.906 00:17:01.906 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.906 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.906 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.163 { 00:17:02.163 "cntlid": 69, 00:17:02.163 "qid": 0, 00:17:02.163 "state": "enabled", 00:17:02.163 "thread": "nvmf_tgt_poll_group_000", 00:17:02.163 "listen_address": { 00:17:02.163 "trtype": "TCP", 00:17:02.163 "adrfam": "IPv4", 00:17:02.163 "traddr": "10.0.0.2", 00:17:02.163 "trsvcid": "4420" 00:17:02.163 }, 00:17:02.163 "peer_address": { 00:17:02.163 "trtype": "TCP", 00:17:02.163 "adrfam": "IPv4", 00:17:02.163 "traddr": "10.0.0.1", 00:17:02.163 "trsvcid": "57578" 00:17:02.163 }, 00:17:02.163 "auth": { 00:17:02.163 "state": "completed", 00:17:02.163 "digest": "sha384", 00:17:02.163 "dhgroup": "ffdhe3072" 00:17:02.163 } 00:17:02.163 } 00:17:02.163 ]' 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.163 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.420 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret 
DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.352 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.609 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.172 00:17:04.172 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.172 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.172 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.172 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.172 15:53:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.172 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.172 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.429 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.429 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.429 { 00:17:04.429 "cntlid": 71, 00:17:04.429 "qid": 0, 00:17:04.429 "state": "enabled", 00:17:04.429 "thread": "nvmf_tgt_poll_group_000", 00:17:04.429 "listen_address": { 00:17:04.429 "trtype": "TCP", 00:17:04.429 "adrfam": "IPv4", 00:17:04.429 "traddr": "10.0.0.2", 00:17:04.429 "trsvcid": "4420" 00:17:04.429 }, 00:17:04.429 "peer_address": { 00:17:04.429 "trtype": "TCP", 00:17:04.429 "adrfam": "IPv4", 00:17:04.429 "traddr": "10.0.0.1", 00:17:04.429 "trsvcid": "57602" 00:17:04.429 }, 00:17:04.429 "auth": { 00:17:04.429 "state": "completed", 00:17:04.429 "digest": "sha384", 00:17:04.429 "dhgroup": "ffdhe3072" 00:17:04.429 } 00:17:04.429 } 00:17:04.429 ]' 00:17:04.429 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.429 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.429 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.429 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.429 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.429 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.429 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.429 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.687 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.618 15:53:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.876 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.441 00:17:06.441 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.441 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.441 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.441 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.441 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.441 15:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.441 15:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.441 15:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.441 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.441 { 00:17:06.441 "cntlid": 73, 00:17:06.441 "qid": 0, 00:17:06.441 "state": "enabled", 00:17:06.441 "thread": "nvmf_tgt_poll_group_000", 00:17:06.441 "listen_address": { 00:17:06.441 "trtype": "TCP", 00:17:06.441 "adrfam": "IPv4", 00:17:06.441 "traddr": "10.0.0.2", 00:17:06.441 "trsvcid": "4420" 00:17:06.441 }, 00:17:06.441 "peer_address": { 00:17:06.441 "trtype": "TCP", 00:17:06.441 "adrfam": "IPv4", 00:17:06.441 "traddr": "10.0.0.1", 00:17:06.441 "trsvcid": "57634" 00:17:06.441 }, 00:17:06.441 "auth": { 00:17:06.441 
"state": "completed", 00:17:06.441 "digest": "sha384", 00:17:06.441 "dhgroup": "ffdhe4096" 00:17:06.441 } 00:17:06.441 } 00:17:06.441 ]' 00:17:06.441 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.698 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.698 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.698 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.698 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.698 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.698 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.698 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.956 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.887 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.181 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.438 00:17:08.438 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.438 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.438 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.695 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.695 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.695 15:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.695 15:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.695 15:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.695 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.695 { 00:17:08.695 "cntlid": 75, 00:17:08.695 "qid": 0, 00:17:08.695 "state": "enabled", 00:17:08.695 "thread": "nvmf_tgt_poll_group_000", 00:17:08.695 "listen_address": { 00:17:08.695 "trtype": "TCP", 00:17:08.695 "adrfam": "IPv4", 00:17:08.695 "traddr": "10.0.0.2", 00:17:08.695 "trsvcid": "4420" 00:17:08.695 }, 00:17:08.695 "peer_address": { 00:17:08.695 "trtype": "TCP", 00:17:08.695 "adrfam": "IPv4", 00:17:08.695 "traddr": "10.0.0.1", 00:17:08.695 "trsvcid": "49754" 00:17:08.695 }, 00:17:08.695 "auth": { 00:17:08.695 "state": "completed", 00:17:08.695 "digest": "sha384", 00:17:08.695 "dhgroup": "ffdhe4096" 00:17:08.695 } 00:17:08.695 } 00:17:08.695 ]' 00:17:08.695 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.952 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.952 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.952 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:08.952 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.952 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.952 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.952 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.209 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.142 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.400 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:10.963 00:17:10.963 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.963 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.963 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.221 { 00:17:11.221 "cntlid": 77, 00:17:11.221 "qid": 0, 00:17:11.221 "state": "enabled", 00:17:11.221 "thread": "nvmf_tgt_poll_group_000", 00:17:11.221 "listen_address": { 00:17:11.221 "trtype": "TCP", 00:17:11.221 "adrfam": "IPv4", 00:17:11.221 "traddr": "10.0.0.2", 00:17:11.221 "trsvcid": "4420" 00:17:11.221 }, 00:17:11.221 "peer_address": { 00:17:11.221 "trtype": "TCP", 00:17:11.221 "adrfam": "IPv4", 00:17:11.221 "traddr": "10.0.0.1", 00:17:11.221 "trsvcid": "49786" 00:17:11.221 }, 00:17:11.221 "auth": { 00:17:11.221 "state": "completed", 00:17:11.221 "digest": "sha384", 00:17:11.221 "dhgroup": "ffdhe4096" 00:17:11.221 } 00:17:11.221 } 00:17:11.221 ]' 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.221 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.478 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.409 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.666 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.229 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.229 { 00:17:13.229 "cntlid": 79, 00:17:13.229 "qid": 
0, 00:17:13.229 "state": "enabled", 00:17:13.229 "thread": "nvmf_tgt_poll_group_000", 00:17:13.229 "listen_address": { 00:17:13.229 "trtype": "TCP", 00:17:13.229 "adrfam": "IPv4", 00:17:13.229 "traddr": "10.0.0.2", 00:17:13.229 "trsvcid": "4420" 00:17:13.229 }, 00:17:13.229 "peer_address": { 00:17:13.229 "trtype": "TCP", 00:17:13.229 "adrfam": "IPv4", 00:17:13.229 "traddr": "10.0.0.1", 00:17:13.229 "trsvcid": "49828" 00:17:13.229 }, 00:17:13.229 "auth": { 00:17:13.229 "state": "completed", 00:17:13.229 "digest": "sha384", 00:17:13.229 "dhgroup": "ffdhe4096" 00:17:13.229 } 00:17:13.229 } 00:17:13.229 ]' 00:17:13.229 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.485 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.485 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.485 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.485 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.485 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.485 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.485 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.741 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.671 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:14.928 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:14.928 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.928 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.928 15:53:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:14.928 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:14.928 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.929 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.929 15:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.929 15:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.929 15:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.929 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.929 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.493 00:17:15.493 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.493 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.493 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.749 { 00:17:15.749 "cntlid": 81, 00:17:15.749 "qid": 0, 00:17:15.749 "state": "enabled", 00:17:15.749 "thread": "nvmf_tgt_poll_group_000", 00:17:15.749 "listen_address": { 00:17:15.749 "trtype": "TCP", 00:17:15.749 "adrfam": "IPv4", 00:17:15.749 "traddr": "10.0.0.2", 00:17:15.749 "trsvcid": "4420" 00:17:15.749 }, 00:17:15.749 "peer_address": { 00:17:15.749 "trtype": "TCP", 00:17:15.749 "adrfam": "IPv4", 00:17:15.749 "traddr": "10.0.0.1", 00:17:15.749 "trsvcid": "49856" 00:17:15.749 }, 00:17:15.749 "auth": { 00:17:15.749 "state": "completed", 00:17:15.749 "digest": "sha384", 00:17:15.749 "dhgroup": "ffdhe6144" 00:17:15.749 } 00:17:15.749 } 00:17:15.749 ]' 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.749 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.750 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:15.750 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.750 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.750 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.750 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.750 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.007 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.938 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.195 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.196 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.196 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.196 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.759 00:17:17.759 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.759 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.759 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.016 { 00:17:18.016 "cntlid": 83, 00:17:18.016 "qid": 0, 00:17:18.016 "state": "enabled", 00:17:18.016 "thread": "nvmf_tgt_poll_group_000", 00:17:18.016 "listen_address": { 00:17:18.016 "trtype": "TCP", 00:17:18.016 "adrfam": "IPv4", 00:17:18.016 "traddr": "10.0.0.2", 00:17:18.016 "trsvcid": "4420" 00:17:18.016 }, 00:17:18.016 "peer_address": { 00:17:18.016 "trtype": "TCP", 00:17:18.016 "adrfam": "IPv4", 00:17:18.016 "traddr": "10.0.0.1", 00:17:18.016 "trsvcid": "38424" 00:17:18.016 }, 00:17:18.016 "auth": { 00:17:18.016 "state": "completed", 00:17:18.016 "digest": "sha384", 00:17:18.016 "dhgroup": "ffdhe6144" 00:17:18.016 } 00:17:18.016 } 00:17:18.016 ]' 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.016 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.017 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.017 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.017 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.017 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.017 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.017 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.283 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret 
DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.215 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.472 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.036 00:17:20.036 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.036 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.036 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.294 { 00:17:20.294 "cntlid": 85, 00:17:20.294 "qid": 0, 00:17:20.294 "state": "enabled", 00:17:20.294 "thread": "nvmf_tgt_poll_group_000", 00:17:20.294 "listen_address": { 00:17:20.294 "trtype": "TCP", 00:17:20.294 "adrfam": "IPv4", 00:17:20.294 "traddr": "10.0.0.2", 00:17:20.294 "trsvcid": "4420" 00:17:20.294 }, 00:17:20.294 "peer_address": { 00:17:20.294 "trtype": "TCP", 00:17:20.294 "adrfam": "IPv4", 00:17:20.294 "traddr": "10.0.0.1", 00:17:20.294 "trsvcid": "38458" 00:17:20.294 }, 00:17:20.294 "auth": { 00:17:20.294 "state": "completed", 00:17:20.294 "digest": "sha384", 00:17:20.294 "dhgroup": "ffdhe6144" 00:17:20.294 } 00:17:20.294 } 00:17:20.294 ]' 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.294 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.550 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.550 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.550 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.550 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.550 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.807 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
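Each pass of the loop above drives the same DH-HMAC-CHAP round trip; the one starting here uses sha384/ffdhe6144 with key3. A minimal sketch of a single iteration, restricted to the RPCs and flags that appear in this log ($rpc and $hostnqn are illustrative stand-ins for the test's hostrpc wrapper and host NQN; plain rpc.py stands in for the target-side rpc_cmd wrapper):

  # host side: restrict which DH-HMAC-CHAP digests/dhgroups bdev_nvme may negotiate
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # target side: allow the host NQN on the subsystem with the key under test
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
  # host side: attach a controller; DH-HMAC-CHAP runs during CONNECT
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
  # target side: inspect the negotiated auth parameters, then tear down
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
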
00:17:21.738 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.023 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.588 00:17:22.588 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.588 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.588 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.588 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.588 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.588 15:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.588 15:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.844 15:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.844 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.844 { 00:17:22.844 "cntlid": 87, 00:17:22.844 "qid": 0, 00:17:22.844 "state": "enabled", 00:17:22.844 "thread": "nvmf_tgt_poll_group_000", 00:17:22.844 "listen_address": { 00:17:22.845 "trtype": "TCP", 00:17:22.845 "adrfam": "IPv4", 00:17:22.845 "traddr": "10.0.0.2", 00:17:22.845 "trsvcid": "4420" 00:17:22.845 }, 00:17:22.845 "peer_address": { 00:17:22.845 "trtype": "TCP", 00:17:22.845 "adrfam": "IPv4", 00:17:22.845 "traddr": "10.0.0.1", 00:17:22.845 "trsvcid": "38478" 00:17:22.845 }, 00:17:22.845 "auth": { 00:17:22.845 "state": "completed", 
00:17:22.845 "digest": "sha384", 00:17:22.845 "dhgroup": "ffdhe6144" 00:17:22.845 } 00:17:22.845 } 00:17:22.845 ]' 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.845 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.102 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.033 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.290 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.291 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.291 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.223 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.223 { 00:17:25.223 "cntlid": 89, 00:17:25.223 "qid": 0, 00:17:25.223 "state": "enabled", 00:17:25.223 "thread": "nvmf_tgt_poll_group_000", 00:17:25.223 "listen_address": { 00:17:25.223 "trtype": "TCP", 00:17:25.223 "adrfam": "IPv4", 00:17:25.223 "traddr": "10.0.0.2", 00:17:25.223 "trsvcid": "4420" 00:17:25.223 }, 00:17:25.223 "peer_address": { 00:17:25.223 "trtype": "TCP", 00:17:25.223 "adrfam": "IPv4", 00:17:25.223 "traddr": "10.0.0.1", 00:17:25.223 "trsvcid": "38500" 00:17:25.223 }, 00:17:25.223 "auth": { 00:17:25.223 "state": "completed", 00:17:25.223 "digest": "sha384", 00:17:25.223 "dhgroup": "ffdhe8192" 00:17:25.223 } 00:17:25.223 } 00:17:25.223 ]' 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.223 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.480 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:25.480 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.480 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.480 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.480 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.737 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.666 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.923 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
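After each authenticated attach, the log verifies the result with the same three jq assertions against the target's qpair listing. A condensed sketch of that check for this ffdhe8192/key1 pass (variable names are illustrative; rpc.py again stands in for the target-side rpc_cmd wrapper, and the jq paths and expected values are the ones used in this log):

  # the host-side controller must exist after the authenticated attach
  name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]
  # the target's qpair must report the digest, dhgroup and completed auth state under test
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
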
00:17:27.855 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.855 { 00:17:27.855 "cntlid": 91, 00:17:27.855 "qid": 0, 00:17:27.855 "state": "enabled", 00:17:27.855 "thread": "nvmf_tgt_poll_group_000", 00:17:27.855 "listen_address": { 00:17:27.855 "trtype": "TCP", 00:17:27.855 "adrfam": "IPv4", 00:17:27.855 "traddr": "10.0.0.2", 00:17:27.855 "trsvcid": "4420" 00:17:27.855 }, 00:17:27.855 "peer_address": { 00:17:27.855 "trtype": "TCP", 00:17:27.855 "adrfam": "IPv4", 00:17:27.855 "traddr": "10.0.0.1", 00:17:27.855 "trsvcid": "50150" 00:17:27.855 }, 00:17:27.855 "auth": { 00:17:27.855 "state": "completed", 00:17:27.855 "digest": "sha384", 00:17:27.855 "dhgroup": "ffdhe8192" 00:17:27.855 } 00:17:27.855 } 00:17:27.855 ]' 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.855 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.112 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.112 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.112 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.112 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.112 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.370 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.302 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.559 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.491 00:17:30.491 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.491 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.491 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.491 { 
00:17:30.491 "cntlid": 93, 00:17:30.491 "qid": 0, 00:17:30.491 "state": "enabled", 00:17:30.491 "thread": "nvmf_tgt_poll_group_000", 00:17:30.491 "listen_address": { 00:17:30.491 "trtype": "TCP", 00:17:30.491 "adrfam": "IPv4", 00:17:30.491 "traddr": "10.0.0.2", 00:17:30.491 "trsvcid": "4420" 00:17:30.491 }, 00:17:30.491 "peer_address": { 00:17:30.491 "trtype": "TCP", 00:17:30.491 "adrfam": "IPv4", 00:17:30.491 "traddr": "10.0.0.1", 00:17:30.491 "trsvcid": "50176" 00:17:30.491 }, 00:17:30.491 "auth": { 00:17:30.491 "state": "completed", 00:17:30.491 "digest": "sha384", 00:17:30.491 "dhgroup": "ffdhe8192" 00:17:30.491 } 00:17:30.491 } 00:17:30.491 ]' 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.491 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.749 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.749 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.749 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.749 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.749 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.008 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.939 15:54:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.939 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.196 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.196 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.196 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.760 00:17:32.760 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.760 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.760 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.016 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.016 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.016 15:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.016 15:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.273 { 00:17:33.273 "cntlid": 95, 00:17:33.273 "qid": 0, 00:17:33.273 "state": "enabled", 00:17:33.273 "thread": "nvmf_tgt_poll_group_000", 00:17:33.273 "listen_address": { 00:17:33.273 "trtype": "TCP", 00:17:33.273 "adrfam": "IPv4", 00:17:33.273 "traddr": "10.0.0.2", 00:17:33.273 "trsvcid": "4420" 00:17:33.273 }, 00:17:33.273 "peer_address": { 00:17:33.273 "trtype": "TCP", 00:17:33.273 "adrfam": "IPv4", 00:17:33.273 "traddr": "10.0.0.1", 00:17:33.273 "trsvcid": "50196" 00:17:33.273 }, 00:17:33.273 "auth": { 00:17:33.273 "state": "completed", 00:17:33.273 "digest": "sha384", 00:17:33.273 "dhgroup": "ffdhe8192" 00:17:33.273 } 00:17:33.273 } 00:17:33.273 ]' 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.273 15:54:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.273 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.531 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:34.462 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.720 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.977 00:17:34.977 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.977 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.977 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.234 { 00:17:35.234 "cntlid": 97, 00:17:35.234 "qid": 0, 00:17:35.234 "state": "enabled", 00:17:35.234 "thread": "nvmf_tgt_poll_group_000", 00:17:35.234 "listen_address": { 00:17:35.234 "trtype": "TCP", 00:17:35.234 "adrfam": "IPv4", 00:17:35.234 "traddr": "10.0.0.2", 00:17:35.234 "trsvcid": "4420" 00:17:35.234 }, 00:17:35.234 "peer_address": { 00:17:35.234 "trtype": "TCP", 00:17:35.234 "adrfam": "IPv4", 00:17:35.234 "traddr": "10.0.0.1", 00:17:35.234 "trsvcid": "50232" 00:17:35.234 }, 00:17:35.234 "auth": { 00:17:35.234 "state": "completed", 00:17:35.234 "digest": "sha512", 00:17:35.234 "dhgroup": "null" 00:17:35.234 } 00:17:35.234 } 00:17:35.234 ]' 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.234 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.490 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.490 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.490 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.749 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret 
DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.726 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.289 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.289 { 00:17:37.289 "cntlid": 99, 00:17:37.289 "qid": 0, 00:17:37.289 "state": "enabled", 00:17:37.289 "thread": "nvmf_tgt_poll_group_000", 00:17:37.289 "listen_address": { 00:17:37.289 "trtype": "TCP", 00:17:37.289 "adrfam": "IPv4", 00:17:37.289 "traddr": "10.0.0.2", 00:17:37.289 "trsvcid": "4420" 00:17:37.289 }, 00:17:37.289 "peer_address": { 00:17:37.289 "trtype": "TCP", 00:17:37.289 "adrfam": "IPv4", 00:17:37.289 "traddr": "10.0.0.1", 00:17:37.289 "trsvcid": "51812" 00:17:37.289 }, 00:17:37.289 "auth": { 00:17:37.289 "state": "completed", 00:17:37.289 "digest": "sha512", 00:17:37.289 "dhgroup": "null" 00:17:37.289 } 00:17:37.289 } 00:17:37.289 ]' 00:17:37.289 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.546 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.546 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.546 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:37.546 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.546 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.546 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.546 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.803 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:17:38.747 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.747 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.747 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.747 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.747 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.747 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.747 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.747 15:54:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.004 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.261 00:17:39.261 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.261 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.261 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.519 { 00:17:39.519 "cntlid": 101, 00:17:39.519 "qid": 0, 00:17:39.519 "state": "enabled", 00:17:39.519 "thread": "nvmf_tgt_poll_group_000", 00:17:39.519 "listen_address": { 00:17:39.519 "trtype": "TCP", 00:17:39.519 "adrfam": "IPv4", 00:17:39.519 "traddr": "10.0.0.2", 00:17:39.519 "trsvcid": "4420" 00:17:39.519 }, 00:17:39.519 "peer_address": { 00:17:39.519 "trtype": "TCP", 00:17:39.519 "adrfam": "IPv4", 00:17:39.519 "traddr": "10.0.0.1", 00:17:39.519 "trsvcid": "51840" 00:17:39.519 }, 00:17:39.519 "auth": 
{ 00:17:39.519 "state": "completed", 00:17:39.519 "digest": "sha512", 00:17:39.519 "dhgroup": "null" 00:17:39.519 } 00:17:39.519 } 00:17:39.519 ]' 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.519 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.776 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.709 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:40.967 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.224 00:17:41.481 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.481 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.481 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.481 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.481 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.481 15:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.481 15:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.481 15:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.481 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.481 { 00:17:41.481 "cntlid": 103, 00:17:41.481 "qid": 0, 00:17:41.481 "state": "enabled", 00:17:41.481 "thread": "nvmf_tgt_poll_group_000", 00:17:41.481 "listen_address": { 00:17:41.481 "trtype": "TCP", 00:17:41.481 "adrfam": "IPv4", 00:17:41.481 "traddr": "10.0.0.2", 00:17:41.481 "trsvcid": "4420" 00:17:41.481 }, 00:17:41.481 "peer_address": { 00:17:41.481 "trtype": "TCP", 00:17:41.481 "adrfam": "IPv4", 00:17:41.481 "traddr": "10.0.0.1", 00:17:41.481 "trsvcid": "51878" 00:17:41.481 }, 00:17:41.481 "auth": { 00:17:41.481 "state": "completed", 00:17:41.481 "digest": "sha512", 00:17:41.481 "dhgroup": "null" 00:17:41.481 } 00:17:41.481 } 00:17:41.481 ]' 00:17:41.481 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.739 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.739 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.739 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.739 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.739 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.739 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.739 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.996 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:42.928 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.185 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.443 00:17:43.443 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.443 15:54:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.443 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.699 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.699 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.699 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.699 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.699 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.699 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.699 { 00:17:43.699 "cntlid": 105, 00:17:43.699 "qid": 0, 00:17:43.699 "state": "enabled", 00:17:43.699 "thread": "nvmf_tgt_poll_group_000", 00:17:43.699 "listen_address": { 00:17:43.700 "trtype": "TCP", 00:17:43.700 "adrfam": "IPv4", 00:17:43.700 "traddr": "10.0.0.2", 00:17:43.700 "trsvcid": "4420" 00:17:43.700 }, 00:17:43.700 "peer_address": { 00:17:43.700 "trtype": "TCP", 00:17:43.700 "adrfam": "IPv4", 00:17:43.700 "traddr": "10.0.0.1", 00:17:43.700 "trsvcid": "51924" 00:17:43.700 }, 00:17:43.700 "auth": { 00:17:43.700 "state": "completed", 00:17:43.700 "digest": "sha512", 00:17:43.700 "dhgroup": "ffdhe2048" 00:17:43.700 } 00:17:43.700 } 00:17:43.700 ]' 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.700 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.264 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
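The same cycle repeats for every digest/dhgroup/key combination exercised by target/auth.sh. A condensed sketch of one iteration is below, restating only commands already visible in the trace (rpc_cmd configures the nvmf target, hostrpc drives the bdev_nvme host side via /var/tmp/host.sock; the subsystem NQN, host UUID, and DH-HMAC-CHAP key names are the ones used in this run, and the base64 secrets are elided):
    # select the digest and DH group to negotiate on the host side
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # allow the host on the target subsystem with the key pair under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach an SPDK host controller, forcing the DH-HMAC-CHAP handshake
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the qpair negotiated the expected digest/dhgroup and reached "completed"
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    hostrpc bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, then tear down
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
        --dhchap-secret DHHC-1:00:... --dhchap-ctrl-secret DHHC-1:03:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a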
00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.193 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.194 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.194 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.194 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.194 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.194 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.194 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.759 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.759 { 00:17:45.759 "cntlid": 107, 00:17:45.759 "qid": 0, 00:17:45.759 "state": "enabled", 00:17:45.759 "thread": 
"nvmf_tgt_poll_group_000", 00:17:45.759 "listen_address": { 00:17:45.759 "trtype": "TCP", 00:17:45.759 "adrfam": "IPv4", 00:17:45.759 "traddr": "10.0.0.2", 00:17:45.759 "trsvcid": "4420" 00:17:45.759 }, 00:17:45.759 "peer_address": { 00:17:45.759 "trtype": "TCP", 00:17:45.759 "adrfam": "IPv4", 00:17:45.759 "traddr": "10.0.0.1", 00:17:45.759 "trsvcid": "51944" 00:17:45.759 }, 00:17:45.759 "auth": { 00:17:45.759 "state": "completed", 00:17:45.759 "digest": "sha512", 00:17:45.759 "dhgroup": "ffdhe2048" 00:17:45.759 } 00:17:45.759 } 00:17:45.759 ]' 00:17:45.759 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.016 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.016 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.016 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.016 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.016 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.016 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.016 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.273 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.205 15:54:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.205 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.770 00:17:47.770 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.770 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.770 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.770 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.770 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.770 15:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.770 15:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.027 { 00:17:48.027 "cntlid": 109, 00:17:48.027 "qid": 0, 00:17:48.027 "state": "enabled", 00:17:48.027 "thread": "nvmf_tgt_poll_group_000", 00:17:48.027 "listen_address": { 00:17:48.027 "trtype": "TCP", 00:17:48.027 "adrfam": "IPv4", 00:17:48.027 "traddr": "10.0.0.2", 00:17:48.027 "trsvcid": "4420" 00:17:48.027 }, 00:17:48.027 "peer_address": { 00:17:48.027 "trtype": "TCP", 00:17:48.027 "adrfam": "IPv4", 00:17:48.027 "traddr": "10.0.0.1", 00:17:48.027 "trsvcid": "44342" 00:17:48.027 }, 00:17:48.027 "auth": { 00:17:48.027 "state": "completed", 00:17:48.027 "digest": "sha512", 00:17:48.027 "dhgroup": "ffdhe2048" 00:17:48.027 } 00:17:48.027 } 00:17:48.027 ]' 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.027 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.284 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.217 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.474 15:54:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.738 00:17:49.738 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.738 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.738 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.027 { 00:17:50.027 "cntlid": 111, 00:17:50.027 "qid": 0, 00:17:50.027 "state": "enabled", 00:17:50.027 "thread": "nvmf_tgt_poll_group_000", 00:17:50.027 "listen_address": { 00:17:50.027 "trtype": "TCP", 00:17:50.027 "adrfam": "IPv4", 00:17:50.027 "traddr": "10.0.0.2", 00:17:50.027 "trsvcid": "4420" 00:17:50.027 }, 00:17:50.027 "peer_address": { 00:17:50.027 "trtype": "TCP", 00:17:50.027 "adrfam": "IPv4", 00:17:50.027 "traddr": "10.0.0.1", 00:17:50.027 "trsvcid": "44376" 00:17:50.027 }, 00:17:50.027 "auth": { 00:17:50.027 "state": "completed", 00:17:50.027 "digest": "sha512", 00:17:50.027 "dhgroup": "ffdhe2048" 00:17:50.027 } 00:17:50.027 } 00:17:50.027 ]' 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.027 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.285 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.217 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.474 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.038 00:17:52.038 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.038 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.038 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.295 { 00:17:52.295 "cntlid": 113, 00:17:52.295 "qid": 0, 00:17:52.295 "state": "enabled", 00:17:52.295 "thread": "nvmf_tgt_poll_group_000", 00:17:52.295 "listen_address": { 00:17:52.295 "trtype": "TCP", 00:17:52.295 "adrfam": "IPv4", 00:17:52.295 "traddr": "10.0.0.2", 00:17:52.295 "trsvcid": "4420" 00:17:52.295 }, 00:17:52.295 "peer_address": { 00:17:52.295 "trtype": "TCP", 00:17:52.295 "adrfam": "IPv4", 00:17:52.295 "traddr": "10.0.0.1", 00:17:52.295 "trsvcid": "44398" 00:17:52.295 }, 00:17:52.295 "auth": { 00:17:52.295 "state": "completed", 00:17:52.295 "digest": "sha512", 00:17:52.295 "dhgroup": "ffdhe3072" 00:17:52.295 } 00:17:52.295 } 00:17:52.295 ]' 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.295 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.552 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.483 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.740 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.998 00:17:53.998 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.998 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.998 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.256 { 00:17:54.256 "cntlid": 115, 00:17:54.256 "qid": 0, 00:17:54.256 "state": "enabled", 00:17:54.256 "thread": "nvmf_tgt_poll_group_000", 00:17:54.256 "listen_address": { 00:17:54.256 "trtype": "TCP", 00:17:54.256 "adrfam": "IPv4", 00:17:54.256 "traddr": "10.0.0.2", 00:17:54.256 "trsvcid": "4420" 00:17:54.256 }, 00:17:54.256 "peer_address": { 00:17:54.256 "trtype": "TCP", 00:17:54.256 "adrfam": "IPv4", 00:17:54.256 "traddr": "10.0.0.1", 00:17:54.256 "trsvcid": "44444" 00:17:54.256 }, 00:17:54.256 "auth": { 00:17:54.256 "state": "completed", 00:17:54.256 "digest": "sha512", 00:17:54.256 "dhgroup": "ffdhe3072" 00:17:54.256 } 00:17:54.256 } 
00:17:54.256 ]' 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.256 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.514 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.514 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.514 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.771 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.702 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.959 15:54:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.959 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.960 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.217 00:17:56.217 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.217 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.217 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.474 { 00:17:56.474 "cntlid": 117, 00:17:56.474 "qid": 0, 00:17:56.474 "state": "enabled", 00:17:56.474 "thread": "nvmf_tgt_poll_group_000", 00:17:56.474 "listen_address": { 00:17:56.474 "trtype": "TCP", 00:17:56.474 "adrfam": "IPv4", 00:17:56.474 "traddr": "10.0.0.2", 00:17:56.474 "trsvcid": "4420" 00:17:56.474 }, 00:17:56.474 "peer_address": { 00:17:56.474 "trtype": "TCP", 00:17:56.474 "adrfam": "IPv4", 00:17:56.474 "traddr": "10.0.0.1", 00:17:56.474 "trsvcid": "44476" 00:17:56.474 }, 00:17:56.474 "auth": { 00:17:56.474 "state": "completed", 00:17:56.474 "digest": "sha512", 00:17:56.474 "dhgroup": "ffdhe3072" 00:17:56.474 } 00:17:56.474 } 00:17:56.474 ]' 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.474 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.732 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.664 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.921 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.486 00:17:58.486 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.486 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.486 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.743 { 00:17:58.743 "cntlid": 119, 00:17:58.743 "qid": 0, 00:17:58.743 "state": "enabled", 00:17:58.743 "thread": "nvmf_tgt_poll_group_000", 00:17:58.743 "listen_address": { 00:17:58.743 "trtype": "TCP", 00:17:58.743 "adrfam": "IPv4", 00:17:58.743 "traddr": "10.0.0.2", 00:17:58.743 "trsvcid": "4420" 00:17:58.743 }, 00:17:58.743 "peer_address": { 00:17:58.743 "trtype": "TCP", 00:17:58.743 "adrfam": "IPv4", 00:17:58.743 "traddr": "10.0.0.1", 00:17:58.743 "trsvcid": "48530" 00:17:58.743 }, 00:17:58.743 "auth": { 00:17:58.743 "state": "completed", 00:17:58.743 "digest": "sha512", 00:17:58.743 "dhgroup": "ffdhe3072" 00:17:58.743 } 00:17:58.743 } 00:17:58.743 ]' 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.743 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.000 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.930 15:54:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.930 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.187 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.445 00:18:00.445 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.445 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.445 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.702 { 00:18:00.702 "cntlid": 121, 00:18:00.702 "qid": 0, 00:18:00.702 "state": "enabled", 00:18:00.702 "thread": "nvmf_tgt_poll_group_000", 00:18:00.702 "listen_address": { 00:18:00.702 "trtype": "TCP", 00:18:00.702 "adrfam": "IPv4", 
00:18:00.702 "traddr": "10.0.0.2", 00:18:00.702 "trsvcid": "4420" 00:18:00.702 }, 00:18:00.702 "peer_address": { 00:18:00.702 "trtype": "TCP", 00:18:00.702 "adrfam": "IPv4", 00:18:00.702 "traddr": "10.0.0.1", 00:18:00.702 "trsvcid": "48556" 00:18:00.702 }, 00:18:00.702 "auth": { 00:18:00.702 "state": "completed", 00:18:00.702 "digest": "sha512", 00:18:00.702 "dhgroup": "ffdhe4096" 00:18:00.702 } 00:18:00.702 } 00:18:00.702 ]' 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.702 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.959 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.959 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.959 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.959 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.959 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.216 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.146 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.403 15:54:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.403 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.660 00:18:02.660 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.660 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.660 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.916 { 00:18:02.916 "cntlid": 123, 00:18:02.916 "qid": 0, 00:18:02.916 "state": "enabled", 00:18:02.916 "thread": "nvmf_tgt_poll_group_000", 00:18:02.916 "listen_address": { 00:18:02.916 "trtype": "TCP", 00:18:02.916 "adrfam": "IPv4", 00:18:02.916 "traddr": "10.0.0.2", 00:18:02.916 "trsvcid": "4420" 00:18:02.916 }, 00:18:02.916 "peer_address": { 00:18:02.916 "trtype": "TCP", 00:18:02.916 "adrfam": "IPv4", 00:18:02.916 "traddr": "10.0.0.1", 00:18:02.916 "trsvcid": "48594" 00:18:02.916 }, 00:18:02.916 "auth": { 00:18:02.916 "state": "completed", 00:18:02.916 "digest": "sha512", 00:18:02.916 "dhgroup": "ffdhe4096" 00:18:02.916 } 00:18:02.916 } 00:18:02.916 ]' 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.916 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.172 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.172 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.172 15:54:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.172 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.172 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.432 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.399 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.399 15:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.656 15:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.656 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.656 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.913 00:18:04.913 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.913 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.913 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.170 { 00:18:05.170 "cntlid": 125, 00:18:05.170 "qid": 0, 00:18:05.170 "state": "enabled", 00:18:05.170 "thread": "nvmf_tgt_poll_group_000", 00:18:05.170 "listen_address": { 00:18:05.170 "trtype": "TCP", 00:18:05.170 "adrfam": "IPv4", 00:18:05.170 "traddr": "10.0.0.2", 00:18:05.170 "trsvcid": "4420" 00:18:05.170 }, 00:18:05.170 "peer_address": { 00:18:05.170 "trtype": "TCP", 00:18:05.170 "adrfam": "IPv4", 00:18:05.170 "traddr": "10.0.0.1", 00:18:05.170 "trsvcid": "48614" 00:18:05.170 }, 00:18:05.170 "auth": { 00:18:05.170 "state": "completed", 00:18:05.170 "digest": "sha512", 00:18:05.170 "dhgroup": "ffdhe4096" 00:18:05.170 } 00:18:05.170 } 00:18:05.170 ]' 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.170 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.428 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.360 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.618 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.895 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.895 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.895 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.152 00:18:07.152 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.152 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.152 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.409 { 00:18:07.409 "cntlid": 127, 00:18:07.409 "qid": 0, 00:18:07.409 "state": "enabled", 00:18:07.409 "thread": "nvmf_tgt_poll_group_000", 00:18:07.409 "listen_address": { 00:18:07.409 "trtype": "TCP", 00:18:07.409 "adrfam": "IPv4", 00:18:07.409 "traddr": "10.0.0.2", 00:18:07.409 "trsvcid": "4420" 00:18:07.409 }, 00:18:07.409 "peer_address": { 00:18:07.409 "trtype": "TCP", 00:18:07.409 "adrfam": "IPv4", 00:18:07.409 "traddr": "10.0.0.1", 00:18:07.409 "trsvcid": "58954" 00:18:07.409 }, 00:18:07.409 "auth": { 00:18:07.409 "state": "completed", 00:18:07.409 "digest": "sha512", 00:18:07.409 "dhgroup": "ffdhe4096" 00:18:07.409 } 00:18:07.409 } 00:18:07.409 ]' 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.409 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.666 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.666 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.666 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.923 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.855 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.420 00:18:09.420 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.420 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.420 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.677 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.677 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.677 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.678 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.678 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.678 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.678 { 00:18:09.678 "cntlid": 129, 00:18:09.678 "qid": 0, 00:18:09.678 "state": "enabled", 00:18:09.678 "thread": "nvmf_tgt_poll_group_000", 00:18:09.678 "listen_address": { 00:18:09.678 "trtype": "TCP", 00:18:09.678 "adrfam": "IPv4", 00:18:09.678 "traddr": "10.0.0.2", 00:18:09.678 "trsvcid": "4420" 00:18:09.678 }, 00:18:09.678 "peer_address": { 00:18:09.678 "trtype": "TCP", 00:18:09.678 "adrfam": "IPv4", 00:18:09.678 "traddr": "10.0.0.1", 00:18:09.678 "trsvcid": "58986" 00:18:09.678 }, 00:18:09.678 "auth": { 00:18:09.678 "state": "completed", 00:18:09.678 "digest": "sha512", 00:18:09.678 "dhgroup": "ffdhe6144" 00:18:09.678 } 00:18:09.678 } 00:18:09.678 ]' 00:18:09.678 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.935 15:54:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.935 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.935 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.935 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.935 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.935 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.935 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.192 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.124 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.381 15:54:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.381 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.945 00:18:11.945 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.945 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.945 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.202 { 00:18:12.202 "cntlid": 131, 00:18:12.202 "qid": 0, 00:18:12.202 "state": "enabled", 00:18:12.202 "thread": "nvmf_tgt_poll_group_000", 00:18:12.202 "listen_address": { 00:18:12.202 "trtype": "TCP", 00:18:12.202 "adrfam": "IPv4", 00:18:12.202 "traddr": "10.0.0.2", 00:18:12.202 "trsvcid": "4420" 00:18:12.202 }, 00:18:12.202 "peer_address": { 00:18:12.202 "trtype": "TCP", 00:18:12.202 "adrfam": "IPv4", 00:18:12.202 "traddr": "10.0.0.1", 00:18:12.202 "trsvcid": "59014" 00:18:12.202 }, 00:18:12.202 "auth": { 00:18:12.202 "state": "completed", 00:18:12.202 "digest": "sha512", 00:18:12.202 "dhgroup": "ffdhe6144" 00:18:12.202 } 00:18:12.202 } 00:18:12.202 ]' 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.202 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.459 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.459 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.459 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.716 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.648 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.213 00:18:14.214 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.214 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.214 15:54:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.471 { 00:18:14.471 "cntlid": 133, 00:18:14.471 "qid": 0, 00:18:14.471 "state": "enabled", 00:18:14.471 "thread": "nvmf_tgt_poll_group_000", 00:18:14.471 "listen_address": { 00:18:14.471 "trtype": "TCP", 00:18:14.471 "adrfam": "IPv4", 00:18:14.471 "traddr": "10.0.0.2", 00:18:14.471 "trsvcid": "4420" 00:18:14.471 }, 00:18:14.471 "peer_address": { 00:18:14.471 "trtype": "TCP", 00:18:14.471 "adrfam": "IPv4", 00:18:14.471 "traddr": "10.0.0.1", 00:18:14.471 "trsvcid": "59032" 00:18:14.471 }, 00:18:14.471 "auth": { 00:18:14.471 "state": "completed", 00:18:14.471 "digest": "sha512", 00:18:14.471 "dhgroup": "ffdhe6144" 00:18:14.471 } 00:18:14.471 } 00:18:14.471 ]' 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.471 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.730 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.730 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.730 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.730 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:18:15.663 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.663 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.663 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:15.663 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.663 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.920 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.920 15:54:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.920 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.920 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.178 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.742 00:18:16.742 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.742 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.742 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.999 { 00:18:16.999 "cntlid": 135, 00:18:16.999 "qid": 0, 00:18:16.999 "state": "enabled", 00:18:16.999 "thread": "nvmf_tgt_poll_group_000", 00:18:16.999 "listen_address": { 00:18:16.999 "trtype": "TCP", 00:18:16.999 "adrfam": "IPv4", 00:18:16.999 "traddr": "10.0.0.2", 00:18:16.999 "trsvcid": "4420" 00:18:16.999 }, 
00:18:16.999 "peer_address": { 00:18:16.999 "trtype": "TCP", 00:18:16.999 "adrfam": "IPv4", 00:18:16.999 "traddr": "10.0.0.1", 00:18:16.999 "trsvcid": "59062" 00:18:16.999 }, 00:18:16.999 "auth": { 00:18:16.999 "state": "completed", 00:18:16.999 "digest": "sha512", 00:18:16.999 "dhgroup": "ffdhe6144" 00:18:16.999 } 00:18:16.999 } 00:18:16.999 ]' 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.999 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.000 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.000 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.000 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.000 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.270 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.235 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.492 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.425 00:18:19.425 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.425 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.425 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.682 { 00:18:19.682 "cntlid": 137, 00:18:19.682 "qid": 0, 00:18:19.682 "state": "enabled", 00:18:19.682 "thread": "nvmf_tgt_poll_group_000", 00:18:19.682 "listen_address": { 00:18:19.682 "trtype": "TCP", 00:18:19.682 "adrfam": "IPv4", 00:18:19.682 "traddr": "10.0.0.2", 00:18:19.682 "trsvcid": "4420" 00:18:19.682 }, 00:18:19.682 "peer_address": { 00:18:19.682 "trtype": "TCP", 00:18:19.682 "adrfam": "IPv4", 00:18:19.682 "traddr": "10.0.0.1", 00:18:19.682 "trsvcid": "39520" 00:18:19.682 }, 00:18:19.682 "auth": { 00:18:19.682 "state": "completed", 00:18:19.682 "digest": "sha512", 00:18:19.682 "dhgroup": "ffdhe8192" 00:18:19.682 } 00:18:19.682 } 00:18:19.682 ]' 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.682 15:54:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.682 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.939 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.870 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.128 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.060 00:18:22.060 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.060 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.060 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.318 { 00:18:22.318 "cntlid": 139, 00:18:22.318 "qid": 0, 00:18:22.318 "state": "enabled", 00:18:22.318 "thread": "nvmf_tgt_poll_group_000", 00:18:22.318 "listen_address": { 00:18:22.318 "trtype": "TCP", 00:18:22.318 "adrfam": "IPv4", 00:18:22.318 "traddr": "10.0.0.2", 00:18:22.318 "trsvcid": "4420" 00:18:22.318 }, 00:18:22.318 "peer_address": { 00:18:22.318 "trtype": "TCP", 00:18:22.318 "adrfam": "IPv4", 00:18:22.318 "traddr": "10.0.0.1", 00:18:22.318 "trsvcid": "39556" 00:18:22.318 }, 00:18:22.318 "auth": { 00:18:22.318 "state": "completed", 00:18:22.318 "digest": "sha512", 00:18:22.318 "dhgroup": "ffdhe8192" 00:18:22.318 } 00:18:22.318 } 00:18:22.318 ]' 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.318 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.575 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:ZTY3YTljMDA3OWY4NWQ3NzE5YjcxNmIzYWY4OWNkYzIe7Jip: --dhchap-ctrl-secret DHHC-1:02:MmZiZTVhNzVjMDJhMGQwMzFjNWY5Njc5YjAwYTExNDA3ODg0ZWE5NTJlYjQ5YWQ0KNPA4Q==: 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.507 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.765 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.766 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.766 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.766 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.696 00:18:24.696 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.696 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.696 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.953 { 00:18:24.953 "cntlid": 141, 00:18:24.953 "qid": 0, 00:18:24.953 "state": "enabled", 00:18:24.953 "thread": "nvmf_tgt_poll_group_000", 00:18:24.953 "listen_address": { 00:18:24.953 "trtype": "TCP", 00:18:24.953 "adrfam": "IPv4", 00:18:24.953 "traddr": "10.0.0.2", 00:18:24.953 "trsvcid": "4420" 00:18:24.953 }, 00:18:24.953 "peer_address": { 00:18:24.953 "trtype": "TCP", 00:18:24.953 "adrfam": "IPv4", 00:18:24.953 "traddr": "10.0.0.1", 00:18:24.953 "trsvcid": "39586" 00:18:24.953 }, 00:18:24.953 "auth": { 00:18:24.953 "state": "completed", 00:18:24.953 "digest": "sha512", 00:18:24.953 "dhgroup": "ffdhe8192" 00:18:24.953 } 00:18:24.953 } 00:18:24.953 ]' 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.953 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.211 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:MjhlYTNkY2YyY2ExYWNlZWNjYjhjNGVlMTY5Nzc0ZjYyYTA1OGUzZTdiZDM2ZGI2TBtjNw==: --dhchap-ctrl-secret DHHC-1:01:NDQ5ZDk4NWEwOGZmNTE2MTBiYzJiZDVjMzE3Y2Y5OTHXbvmE: 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.142 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.398 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.329 00:18:27.329 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.329 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.329 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.586 { 00:18:27.586 "cntlid": 143, 00:18:27.586 "qid": 0, 00:18:27.586 "state": "enabled", 00:18:27.586 "thread": "nvmf_tgt_poll_group_000", 00:18:27.586 "listen_address": { 00:18:27.586 "trtype": "TCP", 00:18:27.586 "adrfam": "IPv4", 00:18:27.586 "traddr": "10.0.0.2", 00:18:27.586 "trsvcid": "4420" 00:18:27.586 }, 00:18:27.586 "peer_address": { 00:18:27.586 "trtype": "TCP", 00:18:27.586 "adrfam": "IPv4", 00:18:27.586 "traddr": "10.0.0.1", 00:18:27.586 "trsvcid": "47018" 00:18:27.586 }, 00:18:27.586 "auth": { 00:18:27.586 "state": "completed", 00:18:27.586 "digest": "sha512", 00:18:27.586 "dhgroup": "ffdhe8192" 00:18:27.586 } 00:18:27.586 } 00:18:27.586 ]' 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.586 
15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.586 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.843 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:18:28.772 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.773 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.029 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.959 00:18:29.959 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.959 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.959 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.217 { 00:18:30.217 "cntlid": 145, 00:18:30.217 "qid": 0, 00:18:30.217 "state": "enabled", 00:18:30.217 "thread": "nvmf_tgt_poll_group_000", 00:18:30.217 "listen_address": { 00:18:30.217 "trtype": "TCP", 00:18:30.217 "adrfam": "IPv4", 00:18:30.217 "traddr": "10.0.0.2", 00:18:30.217 "trsvcid": "4420" 00:18:30.217 }, 00:18:30.217 "peer_address": { 00:18:30.217 "trtype": "TCP", 00:18:30.217 "adrfam": "IPv4", 00:18:30.217 "traddr": "10.0.0.1", 00:18:30.217 "trsvcid": "47052" 00:18:30.217 }, 00:18:30.217 "auth": { 00:18:30.217 "state": "completed", 00:18:30.217 "digest": "sha512", 00:18:30.217 "dhgroup": "ffdhe8192" 00:18:30.217 } 00:18:30.217 } 00:18:30.217 ]' 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.217 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.474 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:N2RjNzBiNjQzYWU1MjI4OWI4NTczZjRmMDhiMjAyMWNkNTQ3NjcwZDc2OTk4MDZmYQsPmg==: --dhchap-ctrl-secret DHHC-1:03:ZmYxOTM3M2Y1NjkxMjkwMGU0MzdjNWY5NDFiYjY5ODRlOWM5NzhkMTY2NjE2MzVlODFkYmIxODk5ZDk3ZWZhMBANDKU=: 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:31.406 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:32.392 request: 00:18:32.392 { 00:18:32.392 "name": "nvme0", 00:18:32.392 "trtype": "tcp", 00:18:32.392 "traddr": "10.0.0.2", 00:18:32.392 "adrfam": "ipv4", 00:18:32.392 "trsvcid": "4420", 00:18:32.392 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:32.392 "prchk_reftag": false, 00:18:32.392 "prchk_guard": false, 00:18:32.392 "hdgst": false, 00:18:32.392 "ddgst": false, 00:18:32.392 "dhchap_key": "key2", 00:18:32.392 "method": "bdev_nvme_attach_controller", 00:18:32.392 "req_id": 1 00:18:32.392 } 00:18:32.392 Got JSON-RPC error response 00:18:32.392 response: 00:18:32.392 { 00:18:32.392 "code": -5, 00:18:32.392 "message": "Input/output error" 00:18:32.392 } 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.392 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:32.957 request: 00:18:32.957 { 00:18:32.957 "name": "nvme0", 00:18:32.957 "trtype": "tcp", 00:18:32.957 "traddr": "10.0.0.2", 00:18:32.957 "adrfam": "ipv4", 00:18:32.957 "trsvcid": "4420", 00:18:32.957 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:32.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:32.957 "prchk_reftag": false, 00:18:32.957 "prchk_guard": false, 00:18:32.957 "hdgst": false, 00:18:32.957 "ddgst": false, 00:18:32.957 "dhchap_key": "key1", 00:18:32.957 "dhchap_ctrlr_key": "ckey2", 00:18:32.957 "method": "bdev_nvme_attach_controller", 00:18:32.957 "req_id": 1 00:18:32.957 } 00:18:32.957 Got JSON-RPC error response 00:18:32.957 response: 00:18:32.957 { 00:18:32.957 "code": -5, 00:18:32.957 "message": "Input/output error" 00:18:32.957 } 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.214 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.777 request: 00:18:33.777 { 00:18:33.777 "name": "nvme0", 00:18:33.777 "trtype": "tcp", 00:18:33.777 "traddr": "10.0.0.2", 00:18:33.777 "adrfam": "ipv4", 00:18:33.777 "trsvcid": "4420", 00:18:33.777 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:33.777 "prchk_reftag": false, 00:18:33.777 "prchk_guard": false, 00:18:33.777 "hdgst": false, 00:18:33.777 "ddgst": false, 00:18:33.777 "dhchap_key": "key1", 00:18:33.777 "dhchap_ctrlr_key": "ckey1", 00:18:33.777 "method": "bdev_nvme_attach_controller", 00:18:33.777 "req_id": 1 00:18:33.777 } 00:18:33.777 Got JSON-RPC error response 00:18:33.777 response: 00:18:33.777 { 00:18:33.777 "code": -5, 00:18:33.777 "message": "Input/output error" 00:18:33.777 } 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 20805 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 20805 ']' 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 20805 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 20805 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 20805' 00:18:34.035 killing process with pid 20805 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 20805 00:18:34.035 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 20805 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=42696 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 42696 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 42696 ']' 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.292 15:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 42696 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 42696 ']' 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
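The per-key exchange that the trace above repeats for key1, key2, key3 and finally key0 comes down to a small set of RPCs split between the host-side initiator (rpc.py -s /var/tmp/host.sock) and the nvmf target (driven through the test's rpc_cmd wrapper). Below is a condensed, hedged sketch of one connect_authenticate iteration, using only commands that appear verbatim in this trace; the rpc.py path, socket, NQNs and key names are simply the ones this particular job uses:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

  # 1. Pin the host-side initiator to one digest/dhgroup combination.
  $RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # 2. On the target, allow the host with the key pair under test
  #    (the test issues this via its rpc_cmd wrapper against the nvmf_tgt socket).
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 3. Attach from the host side; DH-HMAC-CHAP runs during this connect.
  $RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # 4. Confirm the controller exists and the qpair reports auth state "completed"
  #    with the expected digest/dhgroup, then detach and repeat for the next key.
  $RPC -s $HOSTSOCK bdev_nvme_get_controllers | jq -r '.[].name'
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'
  $RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0

The negative cases in the trace are the same attach call wrapped in NOT with a deliberately mismatched key (key2 against a host registered for key1, ckey2 against ckey1, key3 with the host removed, and so on); each is expected to fail with the JSON-RPC "Input/output error" (code -5) responses shown above.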
00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.550 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.807 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.808 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.808 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.736 00:18:35.736 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.736 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.736 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.992 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.992 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.992 15:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.992 15:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.992 15:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.992 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.992 { 00:18:35.993 
"cntlid": 1, 00:18:35.993 "qid": 0, 00:18:35.993 "state": "enabled", 00:18:35.993 "thread": "nvmf_tgt_poll_group_000", 00:18:35.993 "listen_address": { 00:18:35.993 "trtype": "TCP", 00:18:35.993 "adrfam": "IPv4", 00:18:35.993 "traddr": "10.0.0.2", 00:18:35.993 "trsvcid": "4420" 00:18:35.993 }, 00:18:35.993 "peer_address": { 00:18:35.993 "trtype": "TCP", 00:18:35.993 "adrfam": "IPv4", 00:18:35.993 "traddr": "10.0.0.1", 00:18:35.993 "trsvcid": "47098" 00:18:35.993 }, 00:18:35.993 "auth": { 00:18:35.993 "state": "completed", 00:18:35.993 "digest": "sha512", 00:18:35.993 "dhgroup": "ffdhe8192" 00:18:35.993 } 00:18:35.993 } 00:18:35.993 ]' 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.993 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.249 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTAyZGJmYjQxYWJiYTk3ZTllMTBkNjNmZDIyYjgxMWFlNDU2MWVlNzUyNjMyMzRjMDVkMDkyMmI4MDhmMTc2Mg64LoU=: 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:37.181 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.438 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.695 request: 00:18:37.695 { 00:18:37.695 "name": "nvme0", 00:18:37.695 "trtype": "tcp", 00:18:37.695 "traddr": "10.0.0.2", 00:18:37.695 "adrfam": "ipv4", 00:18:37.695 "trsvcid": "4420", 00:18:37.695 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:37.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:37.695 "prchk_reftag": false, 00:18:37.695 "prchk_guard": false, 00:18:37.695 "hdgst": false, 00:18:37.695 "ddgst": false, 00:18:37.695 "dhchap_key": "key3", 00:18:37.695 "method": "bdev_nvme_attach_controller", 00:18:37.695 "req_id": 1 00:18:37.695 } 00:18:37.695 Got JSON-RPC error response 00:18:37.695 response: 00:18:37.695 { 00:18:37.695 "code": -5, 00:18:37.695 "message": "Input/output error" 00:18:37.696 } 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:37.696 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.953 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:38.211 request: 00:18:38.211 { 00:18:38.211 "name": "nvme0", 00:18:38.211 "trtype": "tcp", 00:18:38.211 "traddr": "10.0.0.2", 00:18:38.211 "adrfam": "ipv4", 00:18:38.211 "trsvcid": "4420", 00:18:38.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:38.211 "prchk_reftag": false, 00:18:38.211 "prchk_guard": false, 00:18:38.211 "hdgst": false, 00:18:38.211 "ddgst": false, 00:18:38.211 "dhchap_key": "key3", 00:18:38.211 "method": "bdev_nvme_attach_controller", 00:18:38.211 "req_id": 1 00:18:38.211 } 00:18:38.211 Got JSON-RPC error response 00:18:38.211 response: 00:18:38.211 { 00:18:38.211 "code": -5, 00:18:38.211 "message": "Input/output error" 00:18:38.211 } 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.211 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.468 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:38.725 request: 00:18:38.725 { 00:18:38.725 "name": "nvme0", 00:18:38.725 "trtype": "tcp", 00:18:38.725 "traddr": "10.0.0.2", 00:18:38.725 "adrfam": "ipv4", 00:18:38.725 "trsvcid": "4420", 00:18:38.725 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:38.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:38.725 "prchk_reftag": false, 00:18:38.725 "prchk_guard": false, 00:18:38.725 "hdgst": false, 00:18:38.725 "ddgst": false, 00:18:38.725 
"dhchap_key": "key0", 00:18:38.725 "dhchap_ctrlr_key": "key1", 00:18:38.725 "method": "bdev_nvme_attach_controller", 00:18:38.725 "req_id": 1 00:18:38.725 } 00:18:38.725 Got JSON-RPC error response 00:18:38.725 response: 00:18:38.725 { 00:18:38.725 "code": -5, 00:18:38.725 "message": "Input/output error" 00:18:38.725 } 00:18:38.725 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:38.725 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:38.725 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:38.725 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:38.725 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:38.725 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:38.982 00:18:38.982 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:38.982 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.982 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:39.239 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.239 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.239 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 20835 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 20835 ']' 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 20835 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:39.497 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 20835 00:18:39.754 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:39.754 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:39.754 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 20835' 00:18:39.754 killing process with pid 20835 00:18:39.754 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 20835 00:18:39.754 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 20835 00:18:40.012 15:55:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:40.012 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.012 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:40.012 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.012 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:40.012 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.012 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.012 rmmod nvme_tcp 00:18:40.012 rmmod nvme_fabrics 00:18:40.012 rmmod nvme_keyring 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 42696 ']' 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 42696 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 42696 ']' 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 42696 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 42696 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 42696' 00:18:40.269 killing process with pid 42696 00:18:40.269 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 42696 00:18:40.270 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 42696 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.528 15:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.431 15:55:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.431 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.IUW /tmp/spdk.key-sha256.7uO /tmp/spdk.key-sha384.pLf /tmp/spdk.key-sha512.8nx /tmp/spdk.key-sha512.POG /tmp/spdk.key-sha384.eHD /tmp/spdk.key-sha256.ULk '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:42.431 00:18:42.431 real 3m2.541s 00:18:42.431 user 7m6.662s 00:18:42.431 sys 0m25.370s 00:18:42.431 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.431 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.431 ************************************ 00:18:42.431 END TEST nvmf_auth_target 00:18:42.431 ************************************ 00:18:42.431 15:55:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:42.431 15:55:12 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:42.431 15:55:12 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:42.431 15:55:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:42.431 15:55:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.431 15:55:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.431 ************************************ 00:18:42.431 START TEST nvmf_bdevio_no_huge 00:18:42.431 ************************************ 00:18:42.431 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:42.689 * Looking for test storage... 00:18:42.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
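Note on the nvmf/common.sh defaults being sourced here: NVME_HOSTNQN and NVME_HOSTID identify the initiator (the UUID is generated once per run by nvme gen-hostnqn), NVMF_PORT and NVME_SUBNQN give the default listener port and subsystem name, and NVME_HOST packages the identity as ready-made nvme-cli arguments. A minimal sketch of how those pieces combine into an initiator-side connect, using the values from this run — illustrative only, since the bdevio suite below drives the target through the SPDK bdev layer rather than the kernel initiator:

  # identity generated for this run by nvme gen-hostnqn (values from the trace)
  NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
  # connect the kernel NVMe/TCP initiator to the default listener and subsystem
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
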
00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.689 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.690 15:55:12 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.690 15:55:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:44.586 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:44.587 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:44.587 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:44.587 
15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:44.587 Found net devices under 0000:09:00.0: cvl_0_0 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:44.587 Found net devices under 0000:09:00.1: cvl_0_1 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.587 15:55:14 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:44.587 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:44.844 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:44.844 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:44.844 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:44.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:44.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:18:44.845 00:18:44.845 --- 10.0.0.2 ping statistics --- 00:18:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.845 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:44.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:44.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:18:44.845 00:18:44.845 --- 10.0.0.1 ping statistics --- 00:18:44.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:44.845 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=45378 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 45378 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 45378 ']' 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.845 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:44.845 [2024-07-12 15:55:14.441086] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:18:44.845 [2024-07-12 15:55:14.441175] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:44.845 [2024-07-12 15:55:14.513952] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.102 [2024-07-12 15:55:14.619293] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.102 [2024-07-12 15:55:14.619365] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.102 [2024-07-12 15:55:14.619379] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.102 [2024-07-12 15:55:14.619392] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.102 [2024-07-12 15:55:14.619406] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
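The nvmf_tcp_init block above splits the E810 port pair into target and initiator sides with a single network namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and owns 10.0.0.2, cvl_0_1 stays in the root namespace with 10.0.0.1, and nvmf_tgt is then launched inside the namespace. Condensed into a standalone sketch (the same commands as in this trace; run as root, nvmf_tgt path shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # accept TCP port 4420 on cvl_0_1, mirroring the iptables rule added above
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # target runs inside the namespace: no hugepages, 1024 MB of memory, core mask 0x78
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
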
00:18:45.102 [2024-07-12 15:55:14.619491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:45.102 [2024-07-12 15:55:14.619553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:45.102 [2024-07-12 15:55:14.619603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:45.102 [2024-07-12 15:55:14.619606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.102 [2024-07-12 15:55:14.746916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.102 Malloc0 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.102 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:45.103 [2024-07-12 15:55:14.785082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:45.103 { 00:18:45.103 "params": { 00:18:45.103 "name": "Nvme$subsystem", 00:18:45.103 "trtype": "$TEST_TRANSPORT", 00:18:45.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:45.103 "adrfam": "ipv4", 00:18:45.103 "trsvcid": "$NVMF_PORT", 00:18:45.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:45.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:45.103 "hdgst": ${hdgst:-false}, 00:18:45.103 "ddgst": ${ddgst:-false} 00:18:45.103 }, 00:18:45.103 "method": "bdev_nvme_attach_controller" 00:18:45.103 } 00:18:45.103 EOF 00:18:45.103 )") 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:45.103 15:55:14 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:45.103 "params": { 00:18:45.103 "name": "Nvme1", 00:18:45.103 "trtype": "tcp", 00:18:45.103 "traddr": "10.0.0.2", 00:18:45.103 "adrfam": "ipv4", 00:18:45.103 "trsvcid": "4420", 00:18:45.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:45.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.103 "hdgst": false, 00:18:45.103 "ddgst": false 00:18:45.103 }, 00:18:45.103 "method": "bdev_nvme_attach_controller" 00:18:45.103 }' 00:18:45.360 [2024-07-12 15:55:14.833235] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
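The rpc_cmd calls above are what provision the target for this suite; rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock, the socket waited on when nvmf_tgt started. Reduced to the bare RPC sequence (a sketch, relative rpc.py path assumed):

  RPC=./scripts/rpc.py                                     # default socket /var/tmp/spdk.sock
  $RPC nvmf_create_transport -t tcp -o -u 8192             # TCP transport with the options used above
  $RPC bdev_malloc_create 64 512 -b Malloc0                # 64 MiB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary launched next consumes the generated JSON shown above and attaches to that listener as controller Nvme1 (bdev Nvme1n1 in the test output that follows), with header and data digests disabled.
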
00:18:45.360 [2024-07-12 15:55:14.833350] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid45491 ] 00:18:45.360 [2024-07-12 15:55:14.895089] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:45.360 [2024-07-12 15:55:15.011253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.360 [2024-07-12 15:55:15.011303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.360 [2024-07-12 15:55:15.011306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.617 I/O targets: 00:18:45.617 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:45.617 00:18:45.617 00:18:45.617 CUnit - A unit testing framework for C - Version 2.1-3 00:18:45.617 http://cunit.sourceforge.net/ 00:18:45.617 00:18:45.617 00:18:45.617 Suite: bdevio tests on: Nvme1n1 00:18:45.617 Test: blockdev write read block ...passed 00:18:45.617 Test: blockdev write zeroes read block ...passed 00:18:45.617 Test: blockdev write zeroes read no split ...passed 00:18:45.617 Test: blockdev write zeroes read split ...passed 00:18:45.617 Test: blockdev write zeroes read split partial ...passed 00:18:45.617 Test: blockdev reset ...[2024-07-12 15:55:15.345924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:45.617 [2024-07-12 15:55:15.346036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb2100 (9): Bad file descriptor 00:18:45.875 [2024-07-12 15:55:15.359197] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:45.875 passed 00:18:45.875 Test: blockdev write read 8 blocks ...passed 00:18:45.875 Test: blockdev write read size > 128k ...passed 00:18:45.875 Test: blockdev write read invalid size ...passed 00:18:45.875 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:45.875 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:45.875 Test: blockdev write read max offset ...passed 00:18:45.875 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:45.875 Test: blockdev writev readv 8 blocks ...passed 00:18:45.875 Test: blockdev writev readv 30 x 1block ...passed 00:18:45.875 Test: blockdev writev readv block ...passed 00:18:45.875 Test: blockdev writev readv size > 128k ...passed 00:18:45.875 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:45.875 Test: blockdev comparev and writev ...[2024-07-12 15:55:15.533383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.533419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:45.875 [2024-07-12 15:55:15.533442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.533459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:45.875 [2024-07-12 15:55:15.533809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.533834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:45.875 [2024-07-12 15:55:15.533864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.533881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:45.875 [2024-07-12 15:55:15.534203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.534226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:45.875 [2024-07-12 15:55:15.534247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.534263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:45.875 [2024-07-12 15:55:15.534604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.534628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:45.875 [2024-07-12 15:55:15.534648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:45.875 [2024-07-12 15:55:15.534663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:45.875 passed 00:18:46.165 Test: blockdev nvme passthru rw ...passed 00:18:46.165 Test: blockdev nvme passthru vendor specific ...[2024-07-12 15:55:15.617645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.165 [2024-07-12 15:55:15.617674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:46.165 [2024-07-12 15:55:15.617865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.165 [2024-07-12 15:55:15.617888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:46.165 [2024-07-12 15:55:15.618070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.165 [2024-07-12 15:55:15.618093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:46.165 [2024-07-12 15:55:15.618273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:46.165 [2024-07-12 15:55:15.618295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:46.165 passed 00:18:46.165 Test: blockdev nvme admin passthru ...passed 00:18:46.165 Test: blockdev copy ...passed 00:18:46.165 00:18:46.165 Run Summary: Type Total Ran Passed Failed Inactive 00:18:46.165 suites 1 1 n/a 0 0 00:18:46.165 tests 23 23 23 0 0 00:18:46.165 asserts 152 152 152 0 n/a 00:18:46.165 00:18:46.165 Elapsed time = 0.918 seconds 00:18:46.423 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:46.424 rmmod nvme_tcp 00:18:46.424 rmmod nvme_fabrics 00:18:46.424 rmmod nvme_keyring 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 45378 ']' 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 45378 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 45378 ']' 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 45378 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 45378 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45378' 00:18:46.424 killing process with pid 45378 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 45378 00:18:46.424 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 45378 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.990 15:55:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.895 15:55:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.895 00:18:48.895 real 0m6.438s 00:18:48.895 user 0m9.817s 00:18:48.895 sys 0m2.584s 00:18:48.895 15:55:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:48.895 15:55:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.895 ************************************ 00:18:48.895 END TEST nvmf_bdevio_no_huge 00:18:48.895 ************************************ 00:18:48.895 15:55:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:48.895 15:55:18 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:48.895 15:55:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:48.895 15:55:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.895 15:55:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.895 ************************************ 00:18:48.895 START TEST nvmf_tls 00:18:48.895 ************************************ 00:18:48.895 15:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:49.155 * Looking for test storage... 
00:18:49.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.155 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:49.156 15:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:51.058 
15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:51.058 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:51.058 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:51.058 Found net devices under 0000:09:00.0: cvl_0_0 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:51.058 Found net devices under 0000:09:00.1: cvl_0_1 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:51.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:51.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:18:51.058 00:18:51.058 --- 10.0.0.2 ping statistics --- 00:18:51.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.058 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:51.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:51.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:18:51.058 00:18:51.058 --- 10.0.0.1 ping statistics --- 00:18:51.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:51.058 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:51.058 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=47565 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 47565 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 47565 ']' 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:51.315 15:55:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.315 [2024-07-12 15:55:20.852046] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:18:51.315 [2024-07-12 15:55:20.852135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:51.315 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.315 [2024-07-12 15:55:20.915862] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.315 [2024-07-12 15:55:21.018960] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:51.315 [2024-07-12 15:55:21.019011] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
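
Note: the block above wires the two E810 ports into a self-contained loopback topology before any NVMe-oF traffic flows: cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator interface, and the two pings confirm reachability in both directions before nvmf_tgt is launched inside the namespace. A minimal sketch of the same layout (run as root; TARGET_IF/INITIATOR_IF/NS are placeholders standing in for the cvl_* names used in this job):

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush dev "$TARGET_IF"; ip -4 addr flush dev "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                          # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                   # initiator stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP from the namespace
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1           # both directions must answer
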
00:18:51.315 [2024-07-12 15:55:21.019041] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:51.315 [2024-07-12 15:55:21.019052] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:51.315 [2024-07-12 15:55:21.019061] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:51.315 [2024-07-12 15:55:21.019086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:51.572 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:51.829 true 00:18:51.829 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:51.829 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:52.086 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:52.086 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:52.086 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:52.343 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.343 15:55:21 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:52.601 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:52.601 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:52.601 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:52.859 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:52.859 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:53.116 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:53.117 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:53.117 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.117 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:53.375 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:53.375 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:53.375 15:55:22 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:53.632 15:55:23 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:53.632 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:53.889 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:53.889 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:53.889 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:54.147 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.147 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:54.404 15:55:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.G6NgJAOINq 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6ifK2kMWNP 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.G6NgJAOINq 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6ifK2kMWNP 00:18:54.405 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:54.662 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:54.920 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.G6NgJAOINq 00:18:54.920 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G6NgJAOINq 00:18:54.920 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:55.177 [2024-07-12 15:55:24.855232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.177 15:55:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:55.435 15:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:55.693 [2024-07-12 15:55:25.352618] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.693 [2024-07-12 15:55:25.352852] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.693 15:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.952 malloc0 00:18:55.952 15:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:56.516 15:55:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G6NgJAOINq 00:18:56.516 [2024-07-12 15:55:26.201876] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:56.516 15:55:26 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.G6NgJAOINq 00:18:56.516 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.705 Initializing NVMe Controllers 00:19:08.705 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:08.705 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:08.705 Initialization complete. Launching workers. 
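
Note: before any connection is made, target/tls.sh first exercises the ssl sock implementation itself: sock_set_default_impl -i ssl, then sock_impl_set_options/sock_impl_get_options round-trips for tls_version 13 and 7 and for the enable_ktls toggle, each read back through jq. It then builds two pre-shared keys in the TLS PSK interchange format seen above, NVMeTLSkey-1:<hh>:<base64 payload>:, where <hh> is the hash identifier (01 for the 32-byte keys here) and the payload is the configured key bytes followed by a CRC-32. A rough stand-in for the format_interchange_psk helper follows; the little-endian CRC byte order is an assumption of this sketch, not something the log states:

format_interchange_psk() {   # sketch only; approximates the python helper invoked via nvmf/common.sh
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # assumption: CRC-32 appended little-endian
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
' "$1" "$2"
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
# expected shape: NVMeTLSkey-1:01:<48 base64 characters>:

Each generated key is then written to a mktemp file and chmod 0600, because the RPCs below take a key path rather than the key value itself.
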
00:19:08.705 ======================================================== 00:19:08.705 Latency(us) 00:19:08.705 Device Information : IOPS MiB/s Average min max 00:19:08.705 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8550.39 33.40 7487.15 1052.70 9057.43 00:19:08.705 ======================================================== 00:19:08.705 Total : 8550.39 33.40 7487.15 1052.70 9057.43 00:19:08.705 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G6NgJAOINq 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G6NgJAOINq' 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=49452 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 49452 /var/tmp/bdevperf.sock 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 49452 ']' 00:19:08.705 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.706 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.706 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.706 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.706 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.706 [2024-07-12 15:55:36.366065] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
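
Note: the 8550 IOPS table above is spdk_nvme_perf talking TLS to a target that was configured entirely over JSON-RPC in the preceding trace. Condensed, the sequence is roughly the following (paths and the key file are the ones from this run; the target was started with --wait-for-rpc, hence the explicit framework_start_init):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.G6NgJAOINq                       # 0600 file holding the interchange-format PSK

$RPC sock_set_default_impl -i ssl
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# initiator side, run from inside the target namespace in this job
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$KEY"
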
00:19:08.706 [2024-07-12 15:55:36.366134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid49452 ] 00:19:08.706 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.706 [2024-07-12 15:55:36.426198] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.706 [2024-07-12 15:55:36.537656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.706 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.706 15:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:08.706 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G6NgJAOINq 00:19:08.706 [2024-07-12 15:55:36.920385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.706 [2024-07-12 15:55:36.920523] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:08.706 TLSTESTn1 00:19:08.706 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:08.706 Running I/O for 10 seconds... 00:19:18.699 00:19:18.699 Latency(us) 00:19:18.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.699 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.699 Verification LBA range: start 0x0 length 0x2000 00:19:18.699 TLSTESTn1 : 10.05 2699.87 10.55 0.00 0.00 47283.69 7524.50 71458.51 00:19:18.699 =================================================================================================================== 00:19:18.699 Total : 2699.87 10.55 0.00 0.00 47283.69 7524.50 71458.51 00:19:18.699 0 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 49452 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 49452 ']' 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 49452 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 49452 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49452' 00:19:18.699 killing process with pid 49452 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 49452 00:19:18.699 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.699 00:19:18.699 Latency(us) 00:19:18.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:18.699 =================================================================================================================== 00:19:18.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.699 [2024-07-12 15:55:47.242630] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 49452 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ifK2kMWNP 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ifK2kMWNP 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ifK2kMWNP 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6ifK2kMWNP' 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=50664 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 50664 /var/tmp/bdevperf.sock 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 50664 ']' 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.699 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.700 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.700 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.700 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.700 [2024-07-12 15:55:47.566592] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:19:18.700 [2024-07-12 15:55:47.566713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50664 ] 00:19:18.700 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.700 [2024-07-12 15:55:47.629188] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.700 [2024-07-12 15:55:47.737060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.700 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.700 15:55:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:18.700 15:55:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6ifK2kMWNP 00:19:18.700 [2024-07-12 15:55:48.079407] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.700 [2024-07-12 15:55:48.079538] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:18.700 [2024-07-12 15:55:48.085178] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:18.700 [2024-07-12 15:55:48.085703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x924150 (107): Transport endpoint is not connected 00:19:18.700 [2024-07-12 15:55:48.086691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x924150 (9): Bad file descriptor 00:19:18.700 [2024-07-12 15:55:48.087690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.700 [2024-07-12 15:55:48.087716] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:18.700 [2024-07-12 15:55:48.087743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
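
Note: this failure is the point of the tls.sh@146 case: bdevperf offers /tmp/tmp.6ifK2kMWNP (key_2), while the only key registered on the target for host1/cnode1 is the first one, so the PSK values do not match, the TLS handshake cannot complete, the socket is torn down (the errno 107 / bad file descriptor chatter above), and bdev_nvme_attach_controller surfaces it as the -5 Input/output error in the JSON-RPC exchange that follows. Expressed as a negative check, the idea is simply (a sketch; the leading ! asserts the RPC must fail, which is effectively what the NOT wrapper in the trace does):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# connecting with a PSK the target does not hold for this host/subsystem pair must fail
! $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6ifK2kMWNP
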
00:19:18.700 request: 00:19:18.700 { 00:19:18.700 "name": "TLSTEST", 00:19:18.700 "trtype": "tcp", 00:19:18.700 "traddr": "10.0.0.2", 00:19:18.700 "adrfam": "ipv4", 00:19:18.700 "trsvcid": "4420", 00:19:18.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.700 "prchk_reftag": false, 00:19:18.700 "prchk_guard": false, 00:19:18.700 "hdgst": false, 00:19:18.700 "ddgst": false, 00:19:18.700 "psk": "/tmp/tmp.6ifK2kMWNP", 00:19:18.700 "method": "bdev_nvme_attach_controller", 00:19:18.700 "req_id": 1 00:19:18.700 } 00:19:18.700 Got JSON-RPC error response 00:19:18.700 response: 00:19:18.700 { 00:19:18.700 "code": -5, 00:19:18.700 "message": "Input/output error" 00:19:18.700 } 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 50664 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 50664 ']' 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 50664 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 50664 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50664' 00:19:18.700 killing process with pid 50664 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 50664 00:19:18.700 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.700 00:19:18.700 Latency(us) 00:19:18.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.700 =================================================================================================================== 00:19:18.700 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.700 [2024-07-12 15:55:48.139591] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 50664 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G6NgJAOINq 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G6NgJAOINq 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G6NgJAOINq 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G6NgJAOINq' 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=50791 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 50791 /var/tmp/bdevperf.sock 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 50791 ']' 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.700 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.959 [2024-07-12 15:55:48.450399] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:19:18.959 [2024-07-12 15:55:48.450475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50791 ] 00:19:18.959 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.959 [2024-07-12 15:55:48.507414] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.959 [2024-07-12 15:55:48.610920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.216 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.216 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:19.216 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.G6NgJAOINq 00:19:19.474 [2024-07-12 15:55:48.947909] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.474 [2024-07-12 15:55:48.948045] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:19.474 [2024-07-12 15:55:48.956517] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:19.474 [2024-07-12 15:55:48.956546] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:19.474 [2024-07-12 15:55:48.956598] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:19.474 [2024-07-12 15:55:48.957159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a2150 (107): Transport endpoint is not connected 00:19:19.474 [2024-07-12 15:55:48.958150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a2150 (9): Bad file descriptor 00:19:19.474 [2024-07-12 15:55:48.959149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:19.474 [2024-07-12 15:55:48.959167] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:19.474 [2024-07-12 15:55:48.959194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
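
Note: the "Could not find PSK for identity" errors here (and in the wrong-subsystem case that follows) show how the key is actually selected: the TLS client offers a PSK identity of the form "NVMe0R01 <hostnqn> <subnqn>", and the target's ssl socket module looks that pair up among the hosts added with --psk. nqn.2016-06.io.spdk:host2 was never added to cnode1 (and cnode1's key does not cover cnode2 in the next case), so the lookup fails before any NVMe-level command is exchanged and the connect attempt collapses into the same -5 error. For illustration, the identity the target failed to match above is just:

# sketch: PSK identity string as it appears in the target's lookup
printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
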
00:19:19.474 request: 00:19:19.474 { 00:19:19.474 "name": "TLSTEST", 00:19:19.474 "trtype": "tcp", 00:19:19.474 "traddr": "10.0.0.2", 00:19:19.474 "adrfam": "ipv4", 00:19:19.474 "trsvcid": "4420", 00:19:19.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.474 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:19.474 "prchk_reftag": false, 00:19:19.474 "prchk_guard": false, 00:19:19.474 "hdgst": false, 00:19:19.474 "ddgst": false, 00:19:19.474 "psk": "/tmp/tmp.G6NgJAOINq", 00:19:19.474 "method": "bdev_nvme_attach_controller", 00:19:19.474 "req_id": 1 00:19:19.474 } 00:19:19.474 Got JSON-RPC error response 00:19:19.474 response: 00:19:19.474 { 00:19:19.474 "code": -5, 00:19:19.474 "message": "Input/output error" 00:19:19.474 } 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 50791 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 50791 ']' 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 50791 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 50791 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50791' 00:19:19.474 killing process with pid 50791 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 50791 00:19:19.474 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.474 00:19:19.474 Latency(us) 00:19:19.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.474 =================================================================================================================== 00:19:19.474 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:19.474 [2024-07-12 15:55:49.000272] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:19.474 15:55:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 50791 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G6NgJAOINq 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G6NgJAOINq 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G6NgJAOINq 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G6NgJAOINq' 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=50930 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 50930 /var/tmp/bdevperf.sock 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 50930 ']' 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.732 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.732 [2024-07-12 15:55:49.279088] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:19:19.732 [2024-07-12 15:55:49.279167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid50930 ] 00:19:19.732 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.732 [2024-07-12 15:55:49.336735] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.732 [2024-07-12 15:55:49.448359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:19.990 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.990 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:19.990 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G6NgJAOINq 00:19:20.248 [2024-07-12 15:55:49.782790] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.248 [2024-07-12 15:55:49.782914] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:20.248 [2024-07-12 15:55:49.788735] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.248 [2024-07-12 15:55:49.788768] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:20.248 [2024-07-12 15:55:49.788823] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:20.248 [2024-07-12 15:55:49.789159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb150 (107): Transport endpoint is not connected 00:19:20.248 [2024-07-12 15:55:49.790149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bb150 (9): Bad file descriptor 00:19:20.248 [2024-07-12 15:55:49.791147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:20.248 [2024-07-12 15:55:49.791166] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:20.248 [2024-07-12 15:55:49.791193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:20.248 request: 00:19:20.248 { 00:19:20.248 "name": "TLSTEST", 00:19:20.248 "trtype": "tcp", 00:19:20.248 "traddr": "10.0.0.2", 00:19:20.248 "adrfam": "ipv4", 00:19:20.248 "trsvcid": "4420", 00:19:20.248 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:20.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.248 "prchk_reftag": false, 00:19:20.248 "prchk_guard": false, 00:19:20.248 "hdgst": false, 00:19:20.248 "ddgst": false, 00:19:20.248 "psk": "/tmp/tmp.G6NgJAOINq", 00:19:20.248 "method": "bdev_nvme_attach_controller", 00:19:20.248 "req_id": 1 00:19:20.248 } 00:19:20.248 Got JSON-RPC error response 00:19:20.248 response: 00:19:20.248 { 00:19:20.248 "code": -5, 00:19:20.248 "message": "Input/output error" 00:19:20.248 } 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 50930 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 50930 ']' 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 50930 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 50930 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:20.248 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:20.249 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50930' 00:19:20.249 killing process with pid 50930 00:19:20.249 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 50930 00:19:20.249 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.249 00:19:20.249 Latency(us) 00:19:20.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.249 =================================================================================================================== 00:19:20.249 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:20.249 [2024-07-12 15:55:49.839566] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:20.249 15:55:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 50930 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=51061 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 51061 /var/tmp/bdevperf.sock 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 51061 ']' 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.507 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.507 [2024-07-12 15:55:50.146026] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:19:20.507 [2024-07-12 15:55:50.146122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51061 ] 00:19:20.507 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.507 [2024-07-12 15:55:50.207934] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.765 [2024-07-12 15:55:50.320297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.765 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.765 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:20.765 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:21.022 [2024-07-12 15:55:50.671130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:21.022 [2024-07-12 15:55:50.673103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171c910 (9): Bad file descriptor 00:19:21.022 [2024-07-12 15:55:50.674099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:21.022 [2024-07-12 15:55:50.674119] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:21.022 [2024-07-12 15:55:50.674147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:21.022 request: 00:19:21.022 { 00:19:21.022 "name": "TLSTEST", 00:19:21.022 "trtype": "tcp", 00:19:21.022 "traddr": "10.0.0.2", 00:19:21.022 "adrfam": "ipv4", 00:19:21.022 "trsvcid": "4420", 00:19:21.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.022 "prchk_reftag": false, 00:19:21.022 "prchk_guard": false, 00:19:21.022 "hdgst": false, 00:19:21.022 "ddgst": false, 00:19:21.022 "method": "bdev_nvme_attach_controller", 00:19:21.022 "req_id": 1 00:19:21.022 } 00:19:21.022 Got JSON-RPC error response 00:19:21.022 response: 00:19:21.022 { 00:19:21.022 "code": -5, 00:19:21.022 "message": "Input/output error" 00:19:21.022 } 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 51061 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 51061 ']' 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 51061 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51061 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51061' 00:19:21.022 killing process with pid 51061 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 51061 00:19:21.022 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.022 00:19:21.022 Latency(us) 00:19:21.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.022 =================================================================================================================== 00:19:21.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.022 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 51061 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 47565 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 47565 ']' 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 47565 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.281 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 47565 00:19:21.538 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:21.538 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:21.538 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47565' 00:19:21.538 killing process with pid 
47565 00:19:21.538 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 47565 00:19:21.538 [2024-07-12 15:55:51.012833] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:21.538 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 47565 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.XCAMLqHnav 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.XCAMLqHnav 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=51217 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 51217 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 51217 ']' 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.796 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.797 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.797 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.797 [2024-07-12 15:55:51.391716] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
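
Note: once the first target is killed, the script derives a longer key with format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2: the 48-byte configured key and hash identifier 02 give the NVMeTLSkey-1:02:...: form seen above (presumably the SHA-384 flavour of the interchange format, versus 01 for the 32-byte SHA-256 keys used earlier; that mapping is an inference, the log only shows the prefixes). Reusing the sketch helper from the earlier note, the call differs only in its inputs:

# sketch: 48-byte key, hash id 02, using format_interchange_psk() from the earlier sketch
format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# expected shape: NVMeTLSkey-1:02:<72 base64 characters, '=='-padded>:

The resulting key lands in /tmp/tmp.XCAMLqHnav (again chmod 0600) and a fresh nvmf_tgt instance (pid 51217) is brought up to be configured with it.
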
00:19:21.797 [2024-07-12 15:55:51.391792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.797 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.797 [2024-07-12 15:55:51.458230] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.057 [2024-07-12 15:55:51.568245] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.057 [2024-07-12 15:55:51.568312] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.057 [2024-07-12 15:55:51.568334] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.057 [2024-07-12 15:55:51.568345] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.057 [2024-07-12 15:55:51.568355] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.057 [2024-07-12 15:55:51.568404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.XCAMLqHnav 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XCAMLqHnav 00:19:22.057 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:22.316 [2024-07-12 15:55:51.976619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.316 15:55:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:22.574 15:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:22.831 [2024-07-12 15:55:52.534147] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:22.831 [2024-07-12 15:55:52.534397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.831 15:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:23.088 malloc0 00:19:23.088 15:55:52 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:23.663 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.XCAMLqHnav 00:19:23.663 [2024-07-12 15:55:53.375804] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:23.920 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCAMLqHnav 00:19:23.920 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:23.920 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XCAMLqHnav' 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=51495 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 51495 /var/tmp/bdevperf.sock 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 51495 ']' 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.921 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.921 [2024-07-12 15:55:53.441738] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
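Collapsed for readability, the target-side setup traced above amounts to the following rpc.py sequence. This is an editorial sketch using the same paths, NQNs and flags that appear in the trace; rpc.py talks to /var/tmp/spdk.sock by default.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.XCAMLqHnav                                   # PSK file created above, mode 0600
$rpc nvmf_create_transport -t tcp -o                      # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0                # backing namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key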
00:19:23.921 [2024-07-12 15:55:53.441814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid51495 ] 00:19:23.921 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.921 [2024-07-12 15:55:53.498808] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.921 [2024-07-12 15:55:53.605552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.178 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.178 15:55:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:24.178 15:55:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav 00:19:24.436 [2024-07-12 15:55:53.941215] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:24.436 [2024-07-12 15:55:53.941360] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:24.436 TLSTESTn1 00:19:24.436 15:55:54 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:24.436 Running I/O for 10 seconds... 00:19:36.625 00:19:36.625 Latency(us) 00:19:36.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.625 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:36.625 Verification LBA range: start 0x0 length 0x2000 00:19:36.625 TLSTESTn1 : 10.03 2793.93 10.91 0.00 0.00 45716.53 7378.87 68739.98 00:19:36.625 =================================================================================================================== 00:19:36.625 Total : 2793.93 10.91 0.00 0.00 45716.53 7378.87 68739.98 00:19:36.625 0 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 51495 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 51495 ']' 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 51495 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51495 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51495' 00:19:36.625 killing process with pid 51495 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 51495 00:19:36.625 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.625 00:19:36.625 Latency(us) 00:19:36.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:36.625 =================================================================================================================== 00:19:36.625 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:36.625 [2024-07-12 15:56:04.252765] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 51495 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.XCAMLqHnav 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCAMLqHnav 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCAMLqHnav 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XCAMLqHnav 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XCAMLqHnav' 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=52700 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 52700 /var/tmp/bdevperf.sock 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 52700 ']' 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.625 [2024-07-12 15:56:04.576354] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
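The initiator side uses the same run_bdevperf pattern each time (seen above for pid 51495 and again below for the negative case): start bdevperf idle with -z, configure it over its own RPC socket, then kick off I/O. A sketch assembled from the commands visible in the trace:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# wait for /var/tmp/bdevperf.sock to appear (the script uses its waitforlisten helper), then:
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.XCAMLqHnav                             # --psk selects NVMe/TCP with TLS
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests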
00:19:36.625 [2024-07-12 15:56:04.576455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid52700 ] 00:19:36.625 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.625 [2024-07-12 15:56:04.633640] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.625 [2024-07-12 15:56:04.742624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:36.625 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav 00:19:36.625 [2024-07-12 15:56:05.125328] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.625 [2024-07-12 15:56:05.125426] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:36.625 [2024-07-12 15:56:05.125441] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.XCAMLqHnav 00:19:36.625 request: 00:19:36.625 { 00:19:36.625 "name": "TLSTEST", 00:19:36.625 "trtype": "tcp", 00:19:36.625 "traddr": "10.0.0.2", 00:19:36.625 "adrfam": "ipv4", 00:19:36.625 "trsvcid": "4420", 00:19:36.625 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.625 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.625 "prchk_reftag": false, 00:19:36.625 "prchk_guard": false, 00:19:36.625 "hdgst": false, 00:19:36.625 "ddgst": false, 00:19:36.625 "psk": "/tmp/tmp.XCAMLqHnav", 00:19:36.625 "method": "bdev_nvme_attach_controller", 00:19:36.625 "req_id": 1 00:19:36.625 } 00:19:36.625 Got JSON-RPC error response 00:19:36.625 response: 00:19:36.625 { 00:19:36.625 "code": -1, 00:19:36.625 "message": "Operation not permitted" 00:19:36.625 } 00:19:36.625 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 52700 00:19:36.625 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 52700 ']' 00:19:36.625 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 52700 00:19:36.625 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 52700 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52700' 00:19:36.626 killing process with pid 52700 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 52700 00:19:36.626 Received shutdown signal, test time was about 10.000000 seconds 00:19:36.626 00:19:36.626 Latency(us) 00:19:36.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:36.626 =================================================================================================================== 
00:19:36.626 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 52700 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 51217 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 51217 ']' 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 51217 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 51217 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51217' 00:19:36.626 killing process with pid 51217 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 51217 00:19:36.626 [2024-07-12 15:56:05.458947] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 51217 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=52851 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 52851 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 52851 ']' 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.626 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.626 [2024-07-12 15:56:05.795095] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:19:36.626 [2024-07-12 15:56:05.795185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.626 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.626 [2024-07-12 15:56:05.861751] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.626 [2024-07-12 15:56:05.968275] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.626 [2024-07-12 15:56:05.968345] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.626 [2024-07-12 15:56:05.968361] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.626 [2024-07-12 15:56:05.968374] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.626 [2024-07-12 15:56:05.968384] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.626 [2024-07-12 15:56:05.968426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.XCAMLqHnav 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.XCAMLqHnav 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.XCAMLqHnav 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XCAMLqHnav 00:19:36.626 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:36.883 [2024-07-12 15:56:06.388916] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.883 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.141 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.434 [2024-07-12 15:56:06.878166] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:19:37.434 [2024-07-12 15:56:06.878406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.434 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.691 malloc0 00:19:37.691 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.948 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav 00:19:38.206 [2024-07-12 15:56:07.682697] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:38.206 [2024-07-12 15:56:07.682734] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:38.206 [2024-07-12 15:56:07.682781] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:38.206 request: 00:19:38.206 { 00:19:38.206 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.206 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.206 "psk": "/tmp/tmp.XCAMLqHnav", 00:19:38.206 "method": "nvmf_subsystem_add_host", 00:19:38.206 "req_id": 1 00:19:38.206 } 00:19:38.206 Got JSON-RPC error response 00:19:38.206 response: 00:19:38.206 { 00:19:38.206 "code": -32603, 00:19:38.206 "message": "Internal error" 00:19:38.206 } 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 52851 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 52851 ']' 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 52851 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 52851 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52851' 00:19:38.206 killing process with pid 52851 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 52851 00:19:38.206 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 52851 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.XCAMLqHnav 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=53147 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 53147 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 53147 ']' 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:38.464 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.464 [2024-07-12 15:56:08.042943] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:19:38.464 [2024-07-12 15:56:08.043031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.464 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.464 [2024-07-12 15:56:08.107698] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.722 [2024-07-12 15:56:08.216904] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.722 [2024-07-12 15:56:08.216972] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.722 [2024-07-12 15:56:08.216985] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.722 [2024-07-12 15:56:08.216996] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.722 [2024-07-12 15:56:08.217006] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
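The failures just above are the point of this part of the test: with group/other bits set on the key file the target refuses to read it, and only after restoring owner-only permissions does setup succeed. A compressed sketch of that check, assuming only what the trace shows (the NOT helper used by the script is expected to invert the exit status):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
chmod 0666 /tmp/tmp.XCAMLqHnav
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav \
    || echo "rejected as expected: Incorrect permissions for PSK file"
chmod 0600 /tmp/tmp.XCAMLqHnav
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav   # accepted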
00:19:38.722 [2024-07-12 15:56:08.217041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.XCAMLqHnav 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XCAMLqHnav 00:19:38.722 15:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:38.979 [2024-07-12 15:56:08.585383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:38.979 15:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:39.236 15:56:08 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:39.494 [2024-07-12 15:56:09.167041] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.494 [2024-07-12 15:56:09.167281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.494 15:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:39.751 malloc0 00:19:40.009 15:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:40.267 15:56:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav 00:19:40.524 [2024-07-12 15:56:10.060739] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=53432 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 53432 /var/tmp/bdevperf.sock 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 53432 ']' 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.524 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.524 [2024-07-12 15:56:10.127199] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:19:40.524 [2024-07-12 15:56:10.127283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53432 ] 00:19:40.524 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.524 [2024-07-12 15:56:10.184684] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.782 [2024-07-12 15:56:10.291215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.782 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.782 15:56:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:40.782 15:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav 00:19:41.039 [2024-07-12 15:56:10.650591] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.039 [2024-07-12 15:56:10.650724] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:41.039 TLSTESTn1 00:19:41.039 15:56:10 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:41.605 15:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:41.605 "subsystems": [ 00:19:41.605 { 00:19:41.605 "subsystem": "keyring", 00:19:41.605 "config": [] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "iobuf", 00:19:41.605 "config": [ 00:19:41.605 { 00:19:41.605 "method": "iobuf_set_options", 00:19:41.605 "params": { 00:19:41.605 "small_pool_count": 8192, 00:19:41.605 "large_pool_count": 1024, 00:19:41.605 "small_bufsize": 8192, 00:19:41.605 "large_bufsize": 135168 00:19:41.605 } 00:19:41.605 } 00:19:41.605 ] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "sock", 00:19:41.605 "config": [ 00:19:41.605 { 00:19:41.605 "method": "sock_set_default_impl", 00:19:41.605 "params": { 00:19:41.605 "impl_name": "posix" 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "sock_impl_set_options", 00:19:41.605 "params": { 00:19:41.605 "impl_name": "ssl", 00:19:41.605 "recv_buf_size": 4096, 00:19:41.605 "send_buf_size": 4096, 00:19:41.605 "enable_recv_pipe": true, 00:19:41.605 "enable_quickack": false, 00:19:41.605 "enable_placement_id": 0, 00:19:41.605 "enable_zerocopy_send_server": true, 00:19:41.605 "enable_zerocopy_send_client": false, 00:19:41.605 "zerocopy_threshold": 0, 00:19:41.605 "tls_version": 0, 00:19:41.605 "enable_ktls": false 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "sock_impl_set_options", 00:19:41.605 "params": { 00:19:41.605 "impl_name": "posix", 00:19:41.605 "recv_buf_size": 2097152, 00:19:41.605 
"send_buf_size": 2097152, 00:19:41.605 "enable_recv_pipe": true, 00:19:41.605 "enable_quickack": false, 00:19:41.605 "enable_placement_id": 0, 00:19:41.605 "enable_zerocopy_send_server": true, 00:19:41.605 "enable_zerocopy_send_client": false, 00:19:41.605 "zerocopy_threshold": 0, 00:19:41.605 "tls_version": 0, 00:19:41.605 "enable_ktls": false 00:19:41.605 } 00:19:41.605 } 00:19:41.605 ] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "vmd", 00:19:41.605 "config": [] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "accel", 00:19:41.605 "config": [ 00:19:41.605 { 00:19:41.605 "method": "accel_set_options", 00:19:41.605 "params": { 00:19:41.605 "small_cache_size": 128, 00:19:41.605 "large_cache_size": 16, 00:19:41.605 "task_count": 2048, 00:19:41.605 "sequence_count": 2048, 00:19:41.605 "buf_count": 2048 00:19:41.605 } 00:19:41.605 } 00:19:41.605 ] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "bdev", 00:19:41.605 "config": [ 00:19:41.605 { 00:19:41.605 "method": "bdev_set_options", 00:19:41.605 "params": { 00:19:41.605 "bdev_io_pool_size": 65535, 00:19:41.605 "bdev_io_cache_size": 256, 00:19:41.605 "bdev_auto_examine": true, 00:19:41.605 "iobuf_small_cache_size": 128, 00:19:41.605 "iobuf_large_cache_size": 16 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "bdev_raid_set_options", 00:19:41.605 "params": { 00:19:41.605 "process_window_size_kb": 1024 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "bdev_iscsi_set_options", 00:19:41.605 "params": { 00:19:41.605 "timeout_sec": 30 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "bdev_nvme_set_options", 00:19:41.605 "params": { 00:19:41.605 "action_on_timeout": "none", 00:19:41.605 "timeout_us": 0, 00:19:41.605 "timeout_admin_us": 0, 00:19:41.605 "keep_alive_timeout_ms": 10000, 00:19:41.605 "arbitration_burst": 0, 00:19:41.605 "low_priority_weight": 0, 00:19:41.605 "medium_priority_weight": 0, 00:19:41.605 "high_priority_weight": 0, 00:19:41.605 "nvme_adminq_poll_period_us": 10000, 00:19:41.605 "nvme_ioq_poll_period_us": 0, 00:19:41.605 "io_queue_requests": 0, 00:19:41.605 "delay_cmd_submit": true, 00:19:41.605 "transport_retry_count": 4, 00:19:41.605 "bdev_retry_count": 3, 00:19:41.605 "transport_ack_timeout": 0, 00:19:41.605 "ctrlr_loss_timeout_sec": 0, 00:19:41.605 "reconnect_delay_sec": 0, 00:19:41.605 "fast_io_fail_timeout_sec": 0, 00:19:41.605 "disable_auto_failback": false, 00:19:41.605 "generate_uuids": false, 00:19:41.605 "transport_tos": 0, 00:19:41.605 "nvme_error_stat": false, 00:19:41.605 "rdma_srq_size": 0, 00:19:41.605 "io_path_stat": false, 00:19:41.605 "allow_accel_sequence": false, 00:19:41.605 "rdma_max_cq_size": 0, 00:19:41.605 "rdma_cm_event_timeout_ms": 0, 00:19:41.605 "dhchap_digests": [ 00:19:41.605 "sha256", 00:19:41.605 "sha384", 00:19:41.605 "sha512" 00:19:41.605 ], 00:19:41.605 "dhchap_dhgroups": [ 00:19:41.605 "null", 00:19:41.605 "ffdhe2048", 00:19:41.605 "ffdhe3072", 00:19:41.605 "ffdhe4096", 00:19:41.605 "ffdhe6144", 00:19:41.605 "ffdhe8192" 00:19:41.605 ] 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "bdev_nvme_set_hotplug", 00:19:41.605 "params": { 00:19:41.605 "period_us": 100000, 00:19:41.605 "enable": false 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "bdev_malloc_create", 00:19:41.605 "params": { 00:19:41.605 "name": "malloc0", 00:19:41.605 "num_blocks": 8192, 00:19:41.605 "block_size": 4096, 00:19:41.605 "physical_block_size": 4096, 00:19:41.605 "uuid": 
"da5aef0e-84b0-4b1b-ad6c-38143f6816e1", 00:19:41.605 "optimal_io_boundary": 0 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "bdev_wait_for_examine" 00:19:41.605 } 00:19:41.605 ] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "nbd", 00:19:41.605 "config": [] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "scheduler", 00:19:41.605 "config": [ 00:19:41.605 { 00:19:41.605 "method": "framework_set_scheduler", 00:19:41.605 "params": { 00:19:41.605 "name": "static" 00:19:41.605 } 00:19:41.605 } 00:19:41.605 ] 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "subsystem": "nvmf", 00:19:41.605 "config": [ 00:19:41.605 { 00:19:41.605 "method": "nvmf_set_config", 00:19:41.605 "params": { 00:19:41.605 "discovery_filter": "match_any", 00:19:41.605 "admin_cmd_passthru": { 00:19:41.605 "identify_ctrlr": false 00:19:41.605 } 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "nvmf_set_max_subsystems", 00:19:41.605 "params": { 00:19:41.605 "max_subsystems": 1024 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "nvmf_set_crdt", 00:19:41.605 "params": { 00:19:41.605 "crdt1": 0, 00:19:41.605 "crdt2": 0, 00:19:41.605 "crdt3": 0 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "nvmf_create_transport", 00:19:41.605 "params": { 00:19:41.605 "trtype": "TCP", 00:19:41.605 "max_queue_depth": 128, 00:19:41.605 "max_io_qpairs_per_ctrlr": 127, 00:19:41.605 "in_capsule_data_size": 4096, 00:19:41.605 "max_io_size": 131072, 00:19:41.605 "io_unit_size": 131072, 00:19:41.605 "max_aq_depth": 128, 00:19:41.605 "num_shared_buffers": 511, 00:19:41.605 "buf_cache_size": 4294967295, 00:19:41.605 "dif_insert_or_strip": false, 00:19:41.605 "zcopy": false, 00:19:41.605 "c2h_success": false, 00:19:41.605 "sock_priority": 0, 00:19:41.605 "abort_timeout_sec": 1, 00:19:41.605 "ack_timeout": 0, 00:19:41.605 "data_wr_pool_size": 0 00:19:41.605 } 00:19:41.605 }, 00:19:41.605 { 00:19:41.605 "method": "nvmf_create_subsystem", 00:19:41.605 "params": { 00:19:41.605 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.605 "allow_any_host": false, 00:19:41.605 "serial_number": "SPDK00000000000001", 00:19:41.605 "model_number": "SPDK bdev Controller", 00:19:41.605 "max_namespaces": 10, 00:19:41.605 "min_cntlid": 1, 00:19:41.605 "max_cntlid": 65519, 00:19:41.605 "ana_reporting": false 00:19:41.605 } 00:19:41.605 }, 00:19:41.606 { 00:19:41.606 "method": "nvmf_subsystem_add_host", 00:19:41.606 "params": { 00:19:41.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.606 "host": "nqn.2016-06.io.spdk:host1", 00:19:41.606 "psk": "/tmp/tmp.XCAMLqHnav" 00:19:41.606 } 00:19:41.606 }, 00:19:41.606 { 00:19:41.606 "method": "nvmf_subsystem_add_ns", 00:19:41.606 "params": { 00:19:41.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.606 "namespace": { 00:19:41.606 "nsid": 1, 00:19:41.606 "bdev_name": "malloc0", 00:19:41.606 "nguid": "DA5AEF0E84B04B1BAD6C38143F6816E1", 00:19:41.606 "uuid": "da5aef0e-84b0-4b1b-ad6c-38143f6816e1", 00:19:41.606 "no_auto_visible": false 00:19:41.606 } 00:19:41.606 } 00:19:41.606 }, 00:19:41.606 { 00:19:41.606 "method": "nvmf_subsystem_add_listener", 00:19:41.606 "params": { 00:19:41.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.606 "listen_address": { 00:19:41.606 "trtype": "TCP", 00:19:41.606 "adrfam": "IPv4", 00:19:41.606 "traddr": "10.0.0.2", 00:19:41.606 "trsvcid": "4420" 00:19:41.606 }, 00:19:41.606 "secure_channel": true 00:19:41.606 } 00:19:41.606 } 00:19:41.606 ] 00:19:41.606 } 00:19:41.606 ] 00:19:41.606 }' 00:19:41.606 15:56:11 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:41.864 15:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:41.864 "subsystems": [ 00:19:41.864 { 00:19:41.864 "subsystem": "keyring", 00:19:41.864 "config": [] 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "subsystem": "iobuf", 00:19:41.864 "config": [ 00:19:41.864 { 00:19:41.864 "method": "iobuf_set_options", 00:19:41.864 "params": { 00:19:41.864 "small_pool_count": 8192, 00:19:41.864 "large_pool_count": 1024, 00:19:41.864 "small_bufsize": 8192, 00:19:41.864 "large_bufsize": 135168 00:19:41.864 } 00:19:41.864 } 00:19:41.864 ] 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "subsystem": "sock", 00:19:41.864 "config": [ 00:19:41.864 { 00:19:41.864 "method": "sock_set_default_impl", 00:19:41.864 "params": { 00:19:41.864 "impl_name": "posix" 00:19:41.864 } 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "method": "sock_impl_set_options", 00:19:41.864 "params": { 00:19:41.864 "impl_name": "ssl", 00:19:41.864 "recv_buf_size": 4096, 00:19:41.864 "send_buf_size": 4096, 00:19:41.864 "enable_recv_pipe": true, 00:19:41.864 "enable_quickack": false, 00:19:41.864 "enable_placement_id": 0, 00:19:41.864 "enable_zerocopy_send_server": true, 00:19:41.864 "enable_zerocopy_send_client": false, 00:19:41.864 "zerocopy_threshold": 0, 00:19:41.864 "tls_version": 0, 00:19:41.864 "enable_ktls": false 00:19:41.864 } 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "method": "sock_impl_set_options", 00:19:41.864 "params": { 00:19:41.864 "impl_name": "posix", 00:19:41.864 "recv_buf_size": 2097152, 00:19:41.864 "send_buf_size": 2097152, 00:19:41.864 "enable_recv_pipe": true, 00:19:41.864 "enable_quickack": false, 00:19:41.864 "enable_placement_id": 0, 00:19:41.864 "enable_zerocopy_send_server": true, 00:19:41.864 "enable_zerocopy_send_client": false, 00:19:41.864 "zerocopy_threshold": 0, 00:19:41.864 "tls_version": 0, 00:19:41.864 "enable_ktls": false 00:19:41.864 } 00:19:41.864 } 00:19:41.864 ] 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "subsystem": "vmd", 00:19:41.864 "config": [] 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "subsystem": "accel", 00:19:41.864 "config": [ 00:19:41.864 { 00:19:41.864 "method": "accel_set_options", 00:19:41.864 "params": { 00:19:41.864 "small_cache_size": 128, 00:19:41.864 "large_cache_size": 16, 00:19:41.864 "task_count": 2048, 00:19:41.864 "sequence_count": 2048, 00:19:41.864 "buf_count": 2048 00:19:41.864 } 00:19:41.864 } 00:19:41.864 ] 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "subsystem": "bdev", 00:19:41.864 "config": [ 00:19:41.864 { 00:19:41.864 "method": "bdev_set_options", 00:19:41.864 "params": { 00:19:41.864 "bdev_io_pool_size": 65535, 00:19:41.864 "bdev_io_cache_size": 256, 00:19:41.864 "bdev_auto_examine": true, 00:19:41.864 "iobuf_small_cache_size": 128, 00:19:41.864 "iobuf_large_cache_size": 16 00:19:41.864 } 00:19:41.864 }, 00:19:41.864 { 00:19:41.864 "method": "bdev_raid_set_options", 00:19:41.864 "params": { 00:19:41.864 "process_window_size_kb": 1024 00:19:41.864 } 00:19:41.864 }, 00:19:41.864 { 00:19:41.865 "method": "bdev_iscsi_set_options", 00:19:41.865 "params": { 00:19:41.865 "timeout_sec": 30 00:19:41.865 } 00:19:41.865 }, 00:19:41.865 { 00:19:41.865 "method": "bdev_nvme_set_options", 00:19:41.865 "params": { 00:19:41.865 "action_on_timeout": "none", 00:19:41.865 "timeout_us": 0, 00:19:41.865 "timeout_admin_us": 0, 00:19:41.865 "keep_alive_timeout_ms": 10000, 00:19:41.865 "arbitration_burst": 0, 
00:19:41.865 "low_priority_weight": 0, 00:19:41.865 "medium_priority_weight": 0, 00:19:41.865 "high_priority_weight": 0, 00:19:41.865 "nvme_adminq_poll_period_us": 10000, 00:19:41.865 "nvme_ioq_poll_period_us": 0, 00:19:41.865 "io_queue_requests": 512, 00:19:41.865 "delay_cmd_submit": true, 00:19:41.865 "transport_retry_count": 4, 00:19:41.865 "bdev_retry_count": 3, 00:19:41.865 "transport_ack_timeout": 0, 00:19:41.865 "ctrlr_loss_timeout_sec": 0, 00:19:41.865 "reconnect_delay_sec": 0, 00:19:41.865 "fast_io_fail_timeout_sec": 0, 00:19:41.865 "disable_auto_failback": false, 00:19:41.865 "generate_uuids": false, 00:19:41.865 "transport_tos": 0, 00:19:41.865 "nvme_error_stat": false, 00:19:41.865 "rdma_srq_size": 0, 00:19:41.865 "io_path_stat": false, 00:19:41.865 "allow_accel_sequence": false, 00:19:41.865 "rdma_max_cq_size": 0, 00:19:41.865 "rdma_cm_event_timeout_ms": 0, 00:19:41.865 "dhchap_digests": [ 00:19:41.865 "sha256", 00:19:41.865 "sha384", 00:19:41.865 "sha512" 00:19:41.865 ], 00:19:41.865 "dhchap_dhgroups": [ 00:19:41.865 "null", 00:19:41.865 "ffdhe2048", 00:19:41.865 "ffdhe3072", 00:19:41.865 "ffdhe4096", 00:19:41.865 "ffdhe6144", 00:19:41.865 "ffdhe8192" 00:19:41.865 ] 00:19:41.865 } 00:19:41.865 }, 00:19:41.865 { 00:19:41.865 "method": "bdev_nvme_attach_controller", 00:19:41.865 "params": { 00:19:41.865 "name": "TLSTEST", 00:19:41.865 "trtype": "TCP", 00:19:41.865 "adrfam": "IPv4", 00:19:41.865 "traddr": "10.0.0.2", 00:19:41.865 "trsvcid": "4420", 00:19:41.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.865 "prchk_reftag": false, 00:19:41.865 "prchk_guard": false, 00:19:41.865 "ctrlr_loss_timeout_sec": 0, 00:19:41.865 "reconnect_delay_sec": 0, 00:19:41.865 "fast_io_fail_timeout_sec": 0, 00:19:41.865 "psk": "/tmp/tmp.XCAMLqHnav", 00:19:41.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.865 "hdgst": false, 00:19:41.865 "ddgst": false 00:19:41.865 } 00:19:41.865 }, 00:19:41.865 { 00:19:41.865 "method": "bdev_nvme_set_hotplug", 00:19:41.865 "params": { 00:19:41.865 "period_us": 100000, 00:19:41.865 "enable": false 00:19:41.865 } 00:19:41.865 }, 00:19:41.865 { 00:19:41.865 "method": "bdev_wait_for_examine" 00:19:41.865 } 00:19:41.865 ] 00:19:41.865 }, 00:19:41.865 { 00:19:41.865 "subsystem": "nbd", 00:19:41.865 "config": [] 00:19:41.865 } 00:19:41.865 ] 00:19:41.865 }' 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 53432 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 53432 ']' 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 53432 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 53432 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53432' 00:19:41.865 killing process with pid 53432 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 53432 00:19:41.865 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.865 00:19:41.865 Latency(us) 00:19:41.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:41.865 =================================================================================================================== 00:19:41.865 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.865 [2024-07-12 15:56:11.393498] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:41.865 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 53432 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 53147 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 53147 ']' 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 53147 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 53147 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53147' 00:19:42.123 killing process with pid 53147 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 53147 00:19:42.123 [2024-07-12 15:56:11.681511] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:42.123 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 53147 00:19:42.381 15:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:42.381 15:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:42.381 15:56:11 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:42.381 "subsystems": [ 00:19:42.381 { 00:19:42.381 "subsystem": "keyring", 00:19:42.381 "config": [] 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "subsystem": "iobuf", 00:19:42.381 "config": [ 00:19:42.381 { 00:19:42.381 "method": "iobuf_set_options", 00:19:42.381 "params": { 00:19:42.381 "small_pool_count": 8192, 00:19:42.381 "large_pool_count": 1024, 00:19:42.381 "small_bufsize": 8192, 00:19:42.381 "large_bufsize": 135168 00:19:42.381 } 00:19:42.381 } 00:19:42.381 ] 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "subsystem": "sock", 00:19:42.381 "config": [ 00:19:42.381 { 00:19:42.381 "method": "sock_set_default_impl", 00:19:42.381 "params": { 00:19:42.381 "impl_name": "posix" 00:19:42.381 } 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "method": "sock_impl_set_options", 00:19:42.381 "params": { 00:19:42.381 "impl_name": "ssl", 00:19:42.381 "recv_buf_size": 4096, 00:19:42.381 "send_buf_size": 4096, 00:19:42.381 "enable_recv_pipe": true, 00:19:42.381 "enable_quickack": false, 00:19:42.381 "enable_placement_id": 0, 00:19:42.381 "enable_zerocopy_send_server": true, 00:19:42.381 "enable_zerocopy_send_client": false, 00:19:42.381 "zerocopy_threshold": 0, 00:19:42.381 "tls_version": 0, 00:19:42.381 "enable_ktls": false 00:19:42.381 } 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "method": "sock_impl_set_options", 00:19:42.381 "params": { 00:19:42.381 "impl_name": "posix", 00:19:42.381 "recv_buf_size": 2097152, 00:19:42.381 "send_buf_size": 2097152, 00:19:42.381 "enable_recv_pipe": true, 00:19:42.381 
"enable_quickack": false, 00:19:42.381 "enable_placement_id": 0, 00:19:42.381 "enable_zerocopy_send_server": true, 00:19:42.381 "enable_zerocopy_send_client": false, 00:19:42.381 "zerocopy_threshold": 0, 00:19:42.381 "tls_version": 0, 00:19:42.381 "enable_ktls": false 00:19:42.381 } 00:19:42.381 } 00:19:42.381 ] 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "subsystem": "vmd", 00:19:42.381 "config": [] 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "subsystem": "accel", 00:19:42.381 "config": [ 00:19:42.381 { 00:19:42.381 "method": "accel_set_options", 00:19:42.381 "params": { 00:19:42.381 "small_cache_size": 128, 00:19:42.381 "large_cache_size": 16, 00:19:42.381 "task_count": 2048, 00:19:42.381 "sequence_count": 2048, 00:19:42.381 "buf_count": 2048 00:19:42.381 } 00:19:42.381 } 00:19:42.381 ] 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "subsystem": "bdev", 00:19:42.381 "config": [ 00:19:42.381 { 00:19:42.381 "method": "bdev_set_options", 00:19:42.381 "params": { 00:19:42.381 "bdev_io_pool_size": 65535, 00:19:42.381 "bdev_io_cache_size": 256, 00:19:42.381 "bdev_auto_examine": true, 00:19:42.381 "iobuf_small_cache_size": 128, 00:19:42.381 "iobuf_large_cache_size": 16 00:19:42.381 } 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "method": "bdev_raid_set_options", 00:19:42.381 "params": { 00:19:42.381 "process_window_size_kb": 1024 00:19:42.381 } 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "method": "bdev_iscsi_set_options", 00:19:42.381 "params": { 00:19:42.381 "timeout_sec": 30 00:19:42.381 } 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "method": "bdev_nvme_set_options", 00:19:42.381 "params": { 00:19:42.381 "action_on_timeout": "none", 00:19:42.381 "timeout_us": 0, 00:19:42.381 "timeout_admin_us": 0, 00:19:42.381 "keep_alive_timeout_ms": 10000, 00:19:42.381 "arbitration_burst": 0, 00:19:42.381 "low_priority_weight": 0, 00:19:42.381 "medium_priority_weight": 0, 00:19:42.381 "high_priority_weight": 0, 00:19:42.381 "nvme_adminq_poll_period_us": 10000, 00:19:42.381 "nvme_ioq_poll_period_us": 0, 00:19:42.381 "io_queue_requests": 0, 00:19:42.381 "delay_cmd_submit": true, 00:19:42.381 "transport_retry_count": 4, 00:19:42.381 "bdev_retry_count": 3, 00:19:42.381 "transport_ack_timeout": 0, 00:19:42.381 "ctrlr_loss_timeout_sec": 0, 00:19:42.381 "reconnect_delay_sec": 0, 00:19:42.381 "fast_io_fail_timeout_sec": 0, 00:19:42.381 "disable_auto_failback": false, 00:19:42.381 "generate_uuids": false, 00:19:42.381 "transport_tos": 0, 00:19:42.381 "nvme_error_stat": false, 00:19:42.381 "rdma_srq_size": 0, 00:19:42.381 "io_path_stat": false, 00:19:42.381 "allow_accel_sequence": false, 00:19:42.381 "rdma_max_cq_size": 0, 00:19:42.381 "rdma_cm_event_timeout_ms": 0, 00:19:42.381 "dhchap_digests": [ 00:19:42.381 "sha256", 00:19:42.381 "sha384", 00:19:42.381 "sha512" 00:19:42.381 ], 00:19:42.381 "dhchap_dhgroups": [ 00:19:42.381 "null", 00:19:42.381 "ffdhe2048", 00:19:42.381 "ffdhe3072", 00:19:42.381 "ffdhe4096", 00:19:42.381 "ffdhe 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:42.381 6144", 00:19:42.381 "ffdhe8192" 00:19:42.381 ] 00:19:42.381 } 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "method": "bdev_nvme_set_hotplug", 00:19:42.381 "params": { 00:19:42.381 "period_us": 100000, 00:19:42.381 "enable": false 00:19:42.381 } 00:19:42.381 }, 00:19:42.381 { 00:19:42.381 "method": "bdev_malloc_create", 00:19:42.381 "params": { 00:19:42.381 "name": "malloc0", 00:19:42.381 "num_blocks": 8192, 00:19:42.381 "block_size": 4096, 00:19:42.381 "physical_block_size": 4096, 00:19:42.381 "uuid": 
"da5aef0e-84b0-4b1b-ad6c-38143f6816e1", 00:19:42.381 "optimal_io_boundary": 0 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "bdev_wait_for_examine" 00:19:42.382 } 00:19:42.382 ] 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "subsystem": "nbd", 00:19:42.382 "config": [] 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "subsystem": "scheduler", 00:19:42.382 "config": [ 00:19:42.382 { 00:19:42.382 "method": "framework_set_scheduler", 00:19:42.382 "params": { 00:19:42.382 "name": "static" 00:19:42.382 } 00:19:42.382 } 00:19:42.382 ] 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "subsystem": "nvmf", 00:19:42.382 "config": [ 00:19:42.382 { 00:19:42.382 "method": "nvmf_set_config", 00:19:42.382 "params": { 00:19:42.382 "discovery_filter": "match_any", 00:19:42.382 "admin_cmd_passthru": { 00:19:42.382 "identify_ctrlr": false 00:19:42.382 } 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "nvmf_set_max_subsystems", 00:19:42.382 "params": { 00:19:42.382 "max_subsystems": 1024 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "nvmf_set_crdt", 00:19:42.382 "params": { 00:19:42.382 "crdt1": 0, 00:19:42.382 "crdt2": 0, 00:19:42.382 "crdt3": 0 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "nvmf_create_transport", 00:19:42.382 "params": { 00:19:42.382 "trtype": "TCP", 00:19:42.382 "max_queue_depth": 128, 00:19:42.382 "max_io_qpairs_per_ctrlr": 127, 00:19:42.382 "in_capsule_data_size": 4096, 00:19:42.382 "max_io_size": 131072, 00:19:42.382 "io_unit_size": 131072, 00:19:42.382 "max_aq_depth": 128, 00:19:42.382 "num_shared_buffers": 511, 00:19:42.382 "buf_cache_size": 4294967295, 00:19:42.382 "dif_insert_or_strip": false, 00:19:42.382 "zcopy": false, 00:19:42.382 "c2h_success": false, 00:19:42.382 "sock_priority": 0, 00:19:42.382 "abort_timeout_sec": 1, 00:19:42.382 "ack_timeout": 0, 00:19:42.382 "data_wr_pool_size": 0 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "nvmf_create_subsystem", 00:19:42.382 "params": { 00:19:42.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.382 "allow_any_host": false, 00:19:42.382 "serial_number": "SPDK00000000000001", 00:19:42.382 "model_number": "SPDK bdev Controller", 00:19:42.382 "max_namespaces": 10, 00:19:42.382 "min_cntlid": 1, 00:19:42.382 "max_cntlid": 65519, 00:19:42.382 "ana_reporting": false 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "nvmf_subsystem_add_host", 00:19:42.382 "params": { 00:19:42.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.382 "host": "nqn.2016-06.io.spdk:host1", 00:19:42.382 "psk": "/tmp/tmp.XCAMLqHnav" 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "nvmf_subsystem_add_ns", 00:19:42.382 "params": { 00:19:42.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.382 "namespace": { 00:19:42.382 "nsid": 1, 00:19:42.382 "bdev_name": "malloc0", 00:19:42.382 "nguid": "DA5AEF0E84B04B1BAD6C38143F6816E1", 00:19:42.382 "uuid": "da5aef0e-84b0-4b1b-ad6c-38143f6816e1", 00:19:42.382 "no_auto_visible": false 00:19:42.382 } 00:19:42.382 } 00:19:42.382 }, 00:19:42.382 { 00:19:42.382 "method": "nvmf_subsystem_add_listener", 00:19:42.382 "params": { 00:19:42.382 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.382 "listen_address": { 00:19:42.382 "trtype": "TCP", 00:19:42.382 "adrfam": "IPv4", 00:19:42.382 "traddr": "10.0.0.2", 00:19:42.382 "trsvcid": "4420" 00:19:42.382 }, 00:19:42.382 "secure_channel": true 00:19:42.382 } 00:19:42.382 } 00:19:42.382 ] 00:19:42.382 } 00:19:42.382 ] 00:19:42.382 }' 00:19:42.382 15:56:11 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=53706 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 53706 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 53706 ']' 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.382 15:56:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:42.382 [2024-07-12 15:56:12.014662] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:19:42.382 [2024-07-12 15:56:12.014768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:42.382 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.382 [2024-07-12 15:56:12.078079] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.641 [2024-07-12 15:56:12.187437] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.641 [2024-07-12 15:56:12.187488] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.641 [2024-07-12 15:56:12.187513] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:42.641 [2024-07-12 15:56:12.187525] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:42.641 [2024-07-12 15:56:12.187535] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.641 [2024-07-12 15:56:12.187626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.898 [2024-07-12 15:56:12.424911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.898 [2024-07-12 15:56:12.440870] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:42.898 [2024-07-12 15:56:12.456930] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.898 [2024-07-12 15:56:12.467456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.460 15:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.460 15:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:43.460 15:56:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:43.460 15:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:43.460 15:56:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=53859 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 53859 /var/tmp/bdevperf.sock 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 53859 ']' 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
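At this point the target is listening on 10.0.0.2:4420 with TLS marked experimental, and the bdevperf initiator has been launched in wait-for-RPC mode (-z) with its own RPC socket and a JSON configuration supplied on file descriptor 63 (that config is the block echoed below). A minimal sketch of the launch pattern, assuming bash process substitution is what produces /dev/fd/63 and with the long Jenkins workspace paths shortened:

# Sketch only: start bdevperf waiting for RPC, with its config supplied via
# process substitution ($bdevperf_config is a placeholder for the JSON echoed below).
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperf_config") &
# Once the socket is up, the verification workload is driven over RPC:
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests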
00:19:43.460 15:56:13 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:43.460 "subsystems": [ 00:19:43.460 { 00:19:43.460 "subsystem": "keyring", 00:19:43.460 "config": [] 00:19:43.460 }, 00:19:43.460 { 00:19:43.460 "subsystem": "iobuf", 00:19:43.460 "config": [ 00:19:43.460 { 00:19:43.460 "method": "iobuf_set_options", 00:19:43.460 "params": { 00:19:43.460 "small_pool_count": 8192, 00:19:43.460 "large_pool_count": 1024, 00:19:43.460 "small_bufsize": 8192, 00:19:43.460 "large_bufsize": 135168 00:19:43.460 } 00:19:43.460 } 00:19:43.460 ] 00:19:43.460 }, 00:19:43.460 { 00:19:43.460 "subsystem": "sock", 00:19:43.460 "config": [ 00:19:43.460 { 00:19:43.460 "method": "sock_set_default_impl", 00:19:43.460 "params": { 00:19:43.460 "impl_name": "posix" 00:19:43.460 } 00:19:43.460 }, 00:19:43.460 { 00:19:43.460 "method": "sock_impl_set_options", 00:19:43.460 "params": { 00:19:43.460 "impl_name": "ssl", 00:19:43.460 "recv_buf_size": 4096, 00:19:43.460 "send_buf_size": 4096, 00:19:43.460 "enable_recv_pipe": true, 00:19:43.460 "enable_quickack": false, 00:19:43.460 "enable_placement_id": 0, 00:19:43.460 "enable_zerocopy_send_server": true, 00:19:43.460 "enable_zerocopy_send_client": false, 00:19:43.460 "zerocopy_threshold": 0, 00:19:43.460 "tls_version": 0, 00:19:43.460 "enable_ktls": false 00:19:43.460 } 00:19:43.460 }, 00:19:43.460 { 00:19:43.460 "method": "sock_impl_set_options", 00:19:43.460 "params": { 00:19:43.460 "impl_name": "posix", 00:19:43.460 "recv_buf_size": 2097152, 00:19:43.460 "send_buf_size": 2097152, 00:19:43.460 "enable_recv_pipe": true, 00:19:43.460 "enable_quickack": false, 00:19:43.460 "enable_placement_id": 0, 00:19:43.460 "enable_zerocopy_send_server": true, 00:19:43.460 "enable_zerocopy_send_client": false, 00:19:43.460 "zerocopy_threshold": 0, 00:19:43.460 "tls_version": 0, 00:19:43.460 "enable_ktls": false 00:19:43.460 } 00:19:43.460 } 00:19:43.460 ] 00:19:43.460 }, 00:19:43.460 { 00:19:43.460 "subsystem": "vmd", 00:19:43.460 "config": [] 00:19:43.460 }, 00:19:43.460 { 00:19:43.460 "subsystem": "accel", 00:19:43.460 "config": [ 00:19:43.460 { 00:19:43.460 "method": "accel_set_options", 00:19:43.460 "params": { 00:19:43.460 "small_cache_size": 128, 00:19:43.460 "large_cache_size": 16, 00:19:43.460 "task_count": 2048, 00:19:43.460 "sequence_count": 2048, 00:19:43.460 "buf_count": 2048 00:19:43.460 } 00:19:43.460 } 00:19:43.460 ] 00:19:43.460 }, 00:19:43.460 { 00:19:43.460 "subsystem": "bdev", 00:19:43.460 "config": [ 00:19:43.460 { 00:19:43.460 "method": "bdev_set_options", 00:19:43.460 "params": { 00:19:43.460 "bdev_io_pool_size": 65535, 00:19:43.460 "bdev_io_cache_size": 256, 00:19:43.460 "bdev_auto_examine": true, 00:19:43.460 "iobuf_small_cache_size": 128, 00:19:43.460 "iobuf_large_cache_size": 16 00:19:43.460 } 00:19:43.460 }, 00:19:43.461 { 00:19:43.461 "method": "bdev_raid_set_options", 00:19:43.461 "params": { 00:19:43.461 "process_window_size_kb": 1024 00:19:43.461 } 00:19:43.461 }, 00:19:43.461 { 00:19:43.461 "method": "bdev_iscsi_set_options", 00:19:43.461 "params": { 00:19:43.461 "timeout_sec": 30 00:19:43.461 } 00:19:43.461 }, 00:19:43.461 { 00:19:43.461 "method": "bdev_nvme_set_options", 00:19:43.461 "params": { 00:19:43.461 "action_on_timeout": "none", 00:19:43.461 "timeout_us": 0, 00:19:43.461 "timeout_admin_us": 0, 00:19:43.461 "keep_alive_timeout_ms": 10000, 00:19:43.461 "arbitration_burst": 0, 00:19:43.461 "low_priority_weight": 0, 00:19:43.461 "medium_priority_weight": 0, 00:19:43.461 "high_priority_weight": 0, 00:19:43.461 
"nvme_adminq_poll_period_us": 10000, 00:19:43.461 "nvme_ioq_poll_period_us": 0, 00:19:43.461 "io_queue_requests": 512, 00:19:43.461 "delay_cmd_submit": true, 00:19:43.461 "transport_retry_count": 4, 00:19:43.461 "bdev_retry_count": 3, 00:19:43.461 "transport_ack_timeout": 0, 00:19:43.461 "ctrlr_loss_timeout_sec": 0, 00:19:43.461 "reconnect_delay_sec": 0, 00:19:43.461 "fast_io_fail_timeout_sec": 0, 00:19:43.461 "disable_auto_failback": false, 00:19:43.461 "generate_uuids": false, 00:19:43.461 "transport_tos": 0, 00:19:43.461 "nvme_error_stat": false, 00:19:43.461 "rdma_srq_size": 0, 00:19:43.461 "io_path_stat": false, 00:19:43.461 "allow_accel_sequence": false, 00:19:43.461 "rdma_max_cq_size": 0, 00:19:43.461 "rdma_cm_event_timeout_ms": 0, 00:19:43.461 "dhchap_digests": [ 00:19:43.461 "sha256", 00:19:43.461 "sha384", 00:19:43.461 "sha512" 00:19:43.461 ], 00:19:43.461 "dhchap_dhgroups": [ 00:19:43.461 "null", 00:19:43.461 "ffdhe2048", 00:19:43.461 "ffdhe3072", 00:19:43.461 "ffdhe4096", 00:19:43.461 "ffdhe6144", 00:19:43.461 "ffdhe8192" 00:19:43.461 ] 00:19:43.461 } 00:19:43.461 }, 00:19:43.461 { 00:19:43.461 "method": "bdev_nvme_attach_controller", 00:19:43.461 "params": { 00:19:43.461 "name": "TLSTEST", 00:19:43.461 "trtype": "TCP", 00:19:43.461 "adrfam": "IPv4", 00:19:43.461 "traddr": "10.0.0.2", 00:19:43.461 "trsvcid": "4420", 00:19:43.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.461 "prchk_reftag": false, 00:19:43.461 "prchk_guard": false, 00:19:43.461 "ctrlr_loss_timeout_sec": 0, 00:19:43.461 "reconnect_delay_sec": 0, 00:19:43.461 "fast_io_fail_timeout_sec": 0, 00:19:43.461 "psk": "/tmp/tmp.XCAMLqHnav", 00:19:43.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.461 "hdgst": false, 00:19:43.461 "ddgst": false 00:19:43.461 } 00:19:43.461 }, 00:19:43.461 { 00:19:43.461 "method": "bdev_nvme_set_hotplug", 00:19:43.461 "params": { 00:19:43.461 "period_us": 100000, 00:19:43.461 "enable": false 00:19:43.461 } 00:19:43.461 }, 00:19:43.461 { 00:19:43.461 "method": "bdev_wait_for_examine" 00:19:43.461 } 00:19:43.461 ] 00:19:43.461 }, 00:19:43.461 { 00:19:43.461 "subsystem": "nbd", 00:19:43.461 "config": [] 00:19:43.461 } 00:19:43.461 ] 00:19:43.461 }' 00:19:43.461 15:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.461 15:56:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.461 [2024-07-12 15:56:13.068156] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:19:43.461 [2024-07-12 15:56:13.068247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid53859 ] 00:19:43.461 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.461 [2024-07-12 15:56:13.124920] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.717 [2024-07-12 15:56:13.231697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.717 [2024-07-12 15:56:13.402867] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.717 [2024-07-12 15:56:13.402988] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:44.648 15:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.648 15:56:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:44.648 15:56:14 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:44.648 Running I/O for 10 seconds... 00:19:54.607 00:19:54.607 Latency(us) 00:19:54.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.607 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:54.607 Verification LBA range: start 0x0 length 0x2000 00:19:54.607 TLSTESTn1 : 10.08 1464.03 5.72 0.00 0.00 87082.72 11990.66 74565.40 00:19:54.607 =================================================================================================================== 00:19:54.607 Total : 1464.03 5.72 0.00 0.00 87082.72 11990.66 74565.40 00:19:54.607 0 00:19:54.607 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:54.607 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 53859 00:19:54.607 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 53859 ']' 00:19:54.607 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 53859 00:19:54.607 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:54.607 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.607 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 53859 00:19:54.608 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:54.608 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:54.608 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53859' 00:19:54.608 killing process with pid 53859 00:19:54.608 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 53859 00:19:54.608 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.608 00:19:54.608 Latency(us) 00:19:54.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.608 =================================================================================================================== 00:19:54.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.608 [2024-07-12 15:56:24.256377] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:19:54.608 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 53859 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 53706 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 53706 ']' 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 53706 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 53706 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53706' 00:19:54.865 killing process with pid 53706 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 53706 00:19:54.865 [2024-07-12 15:56:24.524018] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:54.865 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 53706 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=55191 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 55191 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 55191 ']' 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.125 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.125 [2024-07-12 15:56:24.851960] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:19:55.125 [2024-07-12 15:56:24.852067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.383 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.383 [2024-07-12 15:56:24.916048] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.383 [2024-07-12 15:56:25.015775] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:55.383 [2024-07-12 15:56:25.015828] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.383 [2024-07-12 15:56:25.015857] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.383 [2024-07-12 15:56:25.015869] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.383 [2024-07-12 15:56:25.015878] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.383 [2024-07-12 15:56:25.015905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.XCAMLqHnav 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.XCAMLqHnav 00:19:55.641 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:55.898 [2024-07-12 15:56:25.429169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.898 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:56.155 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:56.413 [2024-07-12 15:56:25.954601] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.413 [2024-07-12 15:56:25.954836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.413 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:56.670 malloc0 00:19:56.670 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:56.932 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav 00:19:57.238 [2024-07-12 15:56:26.695813] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=55468 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 55468 /var/tmp/bdevperf.sock 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 55468 ']' 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.238 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.238 [2024-07-12 15:56:26.758971] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:19:57.238 [2024-07-12 15:56:26.759051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55468 ] 00:19:57.238 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.238 [2024-07-12 15:56:26.816960] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.238 [2024-07-12 15:56:26.922063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.496 15:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:57.496 15:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:57.496 15:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCAMLqHnav 00:19:57.754 15:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.011 [2024-07-12 15:56:27.507866] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.011 nvme0n1 00:19:58.011 15:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.011 Running I/O for 1 seconds... 
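Condensed, the setup traced above has two halves: the target is configured over the default /var/tmp/spdk.sock with a secure-channel TCP listener and a host entry that references the PSK interchange file, while the bdevperf side first registers that same file as a named keyring key and then attaches with --psk pointing at the key name rather than the path. A sketch of the sequence, with the commands taken from the trace rather than from the verbatim tls.sh source:

# Target side:
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XCAMLqHnav
# Initiator side (bdevperf RPC socket): register the PSK file as key0, then
# attach to the subsystem referencing the key by name.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCAMLqHnav
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1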
00:19:59.379 00:19:59.379 Latency(us) 00:19:59.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.379 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.379 Verification LBA range: start 0x0 length 0x2000 00:19:59.379 nvme0n1 : 1.05 2576.43 10.06 0.00 0.00 48606.89 9806.13 68739.98 00:19:59.379 =================================================================================================================== 00:19:59.379 Total : 2576.43 10.06 0.00 0.00 48606.89 9806.13 68739.98 00:19:59.379 0 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 55468 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 55468 ']' 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 55468 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 55468 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55468' 00:19:59.379 killing process with pid 55468 00:19:59.379 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 55468 00:19:59.379 Received shutdown signal, test time was about 1.000000 seconds 00:19:59.379 00:19:59.379 Latency(us) 00:19:59.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.379 =================================================================================================================== 00:19:59.380 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.380 15:56:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 55468 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 55191 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 55191 ']' 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 55191 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 55191 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55191' 00:19:59.380 killing process with pid 55191 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 55191 00:19:59.380 [2024-07-12 15:56:29.082524] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:59.380 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 55191 00:19:59.637 15:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:59.637 15:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.637 15:56:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.637 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.893 15:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=55760 00:19:59.893 15:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:59.893 15:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 55760 00:19:59.893 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 55760 ']' 00:19:59.893 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.893 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.894 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.894 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.894 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.894 [2024-07-12 15:56:29.416352] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:19:59.894 [2024-07-12 15:56:29.416465] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.894 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.894 [2024-07-12 15:56:29.479097] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.894 [2024-07-12 15:56:29.584903] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:59.894 [2024-07-12 15:56:29.584952] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:59.894 [2024-07-12 15:56:29.584981] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:59.894 [2024-07-12 15:56:29.584992] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:59.894 [2024-07-12 15:56:29.585001] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:59.894 [2024-07-12 15:56:29.585031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.150 [2024-07-12 15:56:29.728349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.150 malloc0 00:20:00.150 [2024-07-12 15:56:29.759882] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.150 [2024-07-12 15:56:29.760089] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=55896 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 55896 /var/tmp/bdevperf.sock 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 55896 ']' 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.150 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.151 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.151 15:56:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.151 [2024-07-12 15:56:29.829042] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:20:00.151 [2024-07-12 15:56:29.829103] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid55896 ] 00:20:00.151 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.407 [2024-07-12 15:56:29.885885] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.407 [2024-07-12 15:56:29.990218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.407 15:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.407 15:56:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:00.407 15:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.XCAMLqHnav 00:20:00.971 15:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:00.971 [2024-07-12 15:56:30.623183] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.971 nvme0n1 00:20:01.228 15:56:30 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:01.228 Running I/O for 1 seconds... 00:20:02.159 00:20:02.159 Latency(us) 00:20:02.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.159 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:02.159 Verification LBA range: start 0x0 length 0x2000 00:20:02.159 nvme0n1 : 1.04 2601.81 10.16 0.00 0.00 48304.48 7718.68 85051.16 00:20:02.159 =================================================================================================================== 00:20:02.159 Total : 2601.81 10.16 0.00 0.00 48304.48 7718.68 85051.16 00:20:02.159 0 00:20:02.159 15:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:02.159 15:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.159 15:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.417 15:56:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.417 15:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:02.417 "subsystems": [ 00:20:02.417 { 00:20:02.417 "subsystem": "keyring", 00:20:02.417 "config": [ 00:20:02.417 { 00:20:02.417 "method": "keyring_file_add_key", 00:20:02.417 "params": { 00:20:02.417 "name": "key0", 00:20:02.417 "path": "/tmp/tmp.XCAMLqHnav" 00:20:02.417 } 00:20:02.417 } 00:20:02.417 ] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "iobuf", 00:20:02.417 "config": [ 00:20:02.417 { 00:20:02.417 "method": "iobuf_set_options", 00:20:02.417 "params": { 00:20:02.417 "small_pool_count": 8192, 00:20:02.417 "large_pool_count": 1024, 00:20:02.417 "small_bufsize": 8192, 00:20:02.417 "large_bufsize": 135168 00:20:02.417 } 00:20:02.417 } 00:20:02.417 ] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "sock", 00:20:02.417 "config": [ 00:20:02.417 { 00:20:02.417 "method": "sock_set_default_impl", 00:20:02.417 "params": { 00:20:02.417 "impl_name": "posix" 00:20:02.417 } 
00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "sock_impl_set_options", 00:20:02.417 "params": { 00:20:02.417 "impl_name": "ssl", 00:20:02.417 "recv_buf_size": 4096, 00:20:02.417 "send_buf_size": 4096, 00:20:02.417 "enable_recv_pipe": true, 00:20:02.417 "enable_quickack": false, 00:20:02.417 "enable_placement_id": 0, 00:20:02.417 "enable_zerocopy_send_server": true, 00:20:02.417 "enable_zerocopy_send_client": false, 00:20:02.417 "zerocopy_threshold": 0, 00:20:02.417 "tls_version": 0, 00:20:02.417 "enable_ktls": false 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "sock_impl_set_options", 00:20:02.417 "params": { 00:20:02.417 "impl_name": "posix", 00:20:02.417 "recv_buf_size": 2097152, 00:20:02.417 "send_buf_size": 2097152, 00:20:02.417 "enable_recv_pipe": true, 00:20:02.417 "enable_quickack": false, 00:20:02.417 "enable_placement_id": 0, 00:20:02.417 "enable_zerocopy_send_server": true, 00:20:02.417 "enable_zerocopy_send_client": false, 00:20:02.417 "zerocopy_threshold": 0, 00:20:02.417 "tls_version": 0, 00:20:02.417 "enable_ktls": false 00:20:02.417 } 00:20:02.417 } 00:20:02.417 ] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "vmd", 00:20:02.417 "config": [] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "accel", 00:20:02.417 "config": [ 00:20:02.417 { 00:20:02.417 "method": "accel_set_options", 00:20:02.417 "params": { 00:20:02.417 "small_cache_size": 128, 00:20:02.417 "large_cache_size": 16, 00:20:02.417 "task_count": 2048, 00:20:02.417 "sequence_count": 2048, 00:20:02.417 "buf_count": 2048 00:20:02.417 } 00:20:02.417 } 00:20:02.417 ] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "bdev", 00:20:02.417 "config": [ 00:20:02.417 { 00:20:02.417 "method": "bdev_set_options", 00:20:02.417 "params": { 00:20:02.417 "bdev_io_pool_size": 65535, 00:20:02.417 "bdev_io_cache_size": 256, 00:20:02.417 "bdev_auto_examine": true, 00:20:02.417 "iobuf_small_cache_size": 128, 00:20:02.417 "iobuf_large_cache_size": 16 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "bdev_raid_set_options", 00:20:02.417 "params": { 00:20:02.417 "process_window_size_kb": 1024 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "bdev_iscsi_set_options", 00:20:02.417 "params": { 00:20:02.417 "timeout_sec": 30 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "bdev_nvme_set_options", 00:20:02.417 "params": { 00:20:02.417 "action_on_timeout": "none", 00:20:02.417 "timeout_us": 0, 00:20:02.417 "timeout_admin_us": 0, 00:20:02.417 "keep_alive_timeout_ms": 10000, 00:20:02.417 "arbitration_burst": 0, 00:20:02.417 "low_priority_weight": 0, 00:20:02.417 "medium_priority_weight": 0, 00:20:02.417 "high_priority_weight": 0, 00:20:02.417 "nvme_adminq_poll_period_us": 10000, 00:20:02.417 "nvme_ioq_poll_period_us": 0, 00:20:02.417 "io_queue_requests": 0, 00:20:02.417 "delay_cmd_submit": true, 00:20:02.417 "transport_retry_count": 4, 00:20:02.417 "bdev_retry_count": 3, 00:20:02.417 "transport_ack_timeout": 0, 00:20:02.417 "ctrlr_loss_timeout_sec": 0, 00:20:02.417 "reconnect_delay_sec": 0, 00:20:02.417 "fast_io_fail_timeout_sec": 0, 00:20:02.417 "disable_auto_failback": false, 00:20:02.417 "generate_uuids": false, 00:20:02.417 "transport_tos": 0, 00:20:02.417 "nvme_error_stat": false, 00:20:02.417 "rdma_srq_size": 0, 00:20:02.417 "io_path_stat": false, 00:20:02.417 "allow_accel_sequence": false, 00:20:02.417 "rdma_max_cq_size": 0, 00:20:02.417 "rdma_cm_event_timeout_ms": 0, 00:20:02.417 "dhchap_digests": [ 00:20:02.417 "sha256", 
00:20:02.417 "sha384", 00:20:02.417 "sha512" 00:20:02.417 ], 00:20:02.417 "dhchap_dhgroups": [ 00:20:02.417 "null", 00:20:02.417 "ffdhe2048", 00:20:02.417 "ffdhe3072", 00:20:02.417 "ffdhe4096", 00:20:02.417 "ffdhe6144", 00:20:02.417 "ffdhe8192" 00:20:02.417 ] 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "bdev_nvme_set_hotplug", 00:20:02.417 "params": { 00:20:02.417 "period_us": 100000, 00:20:02.417 "enable": false 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "bdev_malloc_create", 00:20:02.417 "params": { 00:20:02.417 "name": "malloc0", 00:20:02.417 "num_blocks": 8192, 00:20:02.417 "block_size": 4096, 00:20:02.417 "physical_block_size": 4096, 00:20:02.417 "uuid": "87ab2eed-569a-4cba-9c6a-b3b9c6c20b45", 00:20:02.417 "optimal_io_boundary": 0 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "bdev_wait_for_examine" 00:20:02.417 } 00:20:02.417 ] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "nbd", 00:20:02.417 "config": [] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "scheduler", 00:20:02.417 "config": [ 00:20:02.417 { 00:20:02.417 "method": "framework_set_scheduler", 00:20:02.417 "params": { 00:20:02.417 "name": "static" 00:20:02.417 } 00:20:02.417 } 00:20:02.417 ] 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "subsystem": "nvmf", 00:20:02.417 "config": [ 00:20:02.417 { 00:20:02.417 "method": "nvmf_set_config", 00:20:02.417 "params": { 00:20:02.417 "discovery_filter": "match_any", 00:20:02.417 "admin_cmd_passthru": { 00:20:02.417 "identify_ctrlr": false 00:20:02.417 } 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "nvmf_set_max_subsystems", 00:20:02.417 "params": { 00:20:02.417 "max_subsystems": 1024 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "nvmf_set_crdt", 00:20:02.417 "params": { 00:20:02.417 "crdt1": 0, 00:20:02.417 "crdt2": 0, 00:20:02.417 "crdt3": 0 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "nvmf_create_transport", 00:20:02.417 "params": { 00:20:02.417 "trtype": "TCP", 00:20:02.417 "max_queue_depth": 128, 00:20:02.417 "max_io_qpairs_per_ctrlr": 127, 00:20:02.417 "in_capsule_data_size": 4096, 00:20:02.417 "max_io_size": 131072, 00:20:02.417 "io_unit_size": 131072, 00:20:02.417 "max_aq_depth": 128, 00:20:02.417 "num_shared_buffers": 511, 00:20:02.417 "buf_cache_size": 4294967295, 00:20:02.417 "dif_insert_or_strip": false, 00:20:02.417 "zcopy": false, 00:20:02.417 "c2h_success": false, 00:20:02.417 "sock_priority": 0, 00:20:02.417 "abort_timeout_sec": 1, 00:20:02.417 "ack_timeout": 0, 00:20:02.417 "data_wr_pool_size": 0 00:20:02.417 } 00:20:02.417 }, 00:20:02.417 { 00:20:02.417 "method": "nvmf_create_subsystem", 00:20:02.417 "params": { 00:20:02.417 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.417 "allow_any_host": false, 00:20:02.417 "serial_number": "00000000000000000000", 00:20:02.417 "model_number": "SPDK bdev Controller", 00:20:02.417 "max_namespaces": 32, 00:20:02.417 "min_cntlid": 1, 00:20:02.418 "max_cntlid": 65519, 00:20:02.418 "ana_reporting": false 00:20:02.418 } 00:20:02.418 }, 00:20:02.418 { 00:20:02.418 "method": "nvmf_subsystem_add_host", 00:20:02.418 "params": { 00:20:02.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.418 "host": "nqn.2016-06.io.spdk:host1", 00:20:02.418 "psk": "key0" 00:20:02.418 } 00:20:02.418 }, 00:20:02.418 { 00:20:02.418 "method": "nvmf_subsystem_add_ns", 00:20:02.418 "params": { 00:20:02.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.418 "namespace": { 00:20:02.418 "nsid": 1, 
00:20:02.418 "bdev_name": "malloc0", 00:20:02.418 "nguid": "87AB2EED569A4CBA9C6AB3B9C6C20B45", 00:20:02.418 "uuid": "87ab2eed-569a-4cba-9c6a-b3b9c6c20b45", 00:20:02.418 "no_auto_visible": false 00:20:02.418 } 00:20:02.418 } 00:20:02.418 }, 00:20:02.418 { 00:20:02.418 "method": "nvmf_subsystem_add_listener", 00:20:02.418 "params": { 00:20:02.418 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.418 "listen_address": { 00:20:02.418 "trtype": "TCP", 00:20:02.418 "adrfam": "IPv4", 00:20:02.418 "traddr": "10.0.0.2", 00:20:02.418 "trsvcid": "4420" 00:20:02.418 }, 00:20:02.418 "secure_channel": true 00:20:02.418 } 00:20:02.418 } 00:20:02.418 ] 00:20:02.418 } 00:20:02.418 ] 00:20:02.418 }' 00:20:02.418 15:56:31 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:02.675 15:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:02.675 "subsystems": [ 00:20:02.675 { 00:20:02.675 "subsystem": "keyring", 00:20:02.675 "config": [ 00:20:02.675 { 00:20:02.675 "method": "keyring_file_add_key", 00:20:02.675 "params": { 00:20:02.675 "name": "key0", 00:20:02.675 "path": "/tmp/tmp.XCAMLqHnav" 00:20:02.675 } 00:20:02.675 } 00:20:02.675 ] 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "subsystem": "iobuf", 00:20:02.675 "config": [ 00:20:02.675 { 00:20:02.675 "method": "iobuf_set_options", 00:20:02.675 "params": { 00:20:02.675 "small_pool_count": 8192, 00:20:02.675 "large_pool_count": 1024, 00:20:02.675 "small_bufsize": 8192, 00:20:02.675 "large_bufsize": 135168 00:20:02.675 } 00:20:02.675 } 00:20:02.675 ] 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "subsystem": "sock", 00:20:02.675 "config": [ 00:20:02.675 { 00:20:02.675 "method": "sock_set_default_impl", 00:20:02.675 "params": { 00:20:02.675 "impl_name": "posix" 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "sock_impl_set_options", 00:20:02.675 "params": { 00:20:02.675 "impl_name": "ssl", 00:20:02.675 "recv_buf_size": 4096, 00:20:02.675 "send_buf_size": 4096, 00:20:02.675 "enable_recv_pipe": true, 00:20:02.675 "enable_quickack": false, 00:20:02.675 "enable_placement_id": 0, 00:20:02.675 "enable_zerocopy_send_server": true, 00:20:02.675 "enable_zerocopy_send_client": false, 00:20:02.675 "zerocopy_threshold": 0, 00:20:02.675 "tls_version": 0, 00:20:02.675 "enable_ktls": false 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "sock_impl_set_options", 00:20:02.675 "params": { 00:20:02.675 "impl_name": "posix", 00:20:02.675 "recv_buf_size": 2097152, 00:20:02.675 "send_buf_size": 2097152, 00:20:02.675 "enable_recv_pipe": true, 00:20:02.675 "enable_quickack": false, 00:20:02.675 "enable_placement_id": 0, 00:20:02.675 "enable_zerocopy_send_server": true, 00:20:02.675 "enable_zerocopy_send_client": false, 00:20:02.675 "zerocopy_threshold": 0, 00:20:02.675 "tls_version": 0, 00:20:02.675 "enable_ktls": false 00:20:02.675 } 00:20:02.675 } 00:20:02.675 ] 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "subsystem": "vmd", 00:20:02.675 "config": [] 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "subsystem": "accel", 00:20:02.675 "config": [ 00:20:02.675 { 00:20:02.675 "method": "accel_set_options", 00:20:02.675 "params": { 00:20:02.675 "small_cache_size": 128, 00:20:02.675 "large_cache_size": 16, 00:20:02.675 "task_count": 2048, 00:20:02.675 "sequence_count": 2048, 00:20:02.675 "buf_count": 2048 00:20:02.675 } 00:20:02.675 } 00:20:02.675 ] 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "subsystem": "bdev", 00:20:02.675 "config": [ 
00:20:02.675 { 00:20:02.675 "method": "bdev_set_options", 00:20:02.675 "params": { 00:20:02.675 "bdev_io_pool_size": 65535, 00:20:02.675 "bdev_io_cache_size": 256, 00:20:02.675 "bdev_auto_examine": true, 00:20:02.675 "iobuf_small_cache_size": 128, 00:20:02.675 "iobuf_large_cache_size": 16 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "bdev_raid_set_options", 00:20:02.675 "params": { 00:20:02.675 "process_window_size_kb": 1024 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "bdev_iscsi_set_options", 00:20:02.675 "params": { 00:20:02.675 "timeout_sec": 30 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "bdev_nvme_set_options", 00:20:02.675 "params": { 00:20:02.675 "action_on_timeout": "none", 00:20:02.675 "timeout_us": 0, 00:20:02.675 "timeout_admin_us": 0, 00:20:02.675 "keep_alive_timeout_ms": 10000, 00:20:02.675 "arbitration_burst": 0, 00:20:02.675 "low_priority_weight": 0, 00:20:02.675 "medium_priority_weight": 0, 00:20:02.675 "high_priority_weight": 0, 00:20:02.675 "nvme_adminq_poll_period_us": 10000, 00:20:02.675 "nvme_ioq_poll_period_us": 0, 00:20:02.675 "io_queue_requests": 512, 00:20:02.675 "delay_cmd_submit": true, 00:20:02.675 "transport_retry_count": 4, 00:20:02.675 "bdev_retry_count": 3, 00:20:02.675 "transport_ack_timeout": 0, 00:20:02.675 "ctrlr_loss_timeout_sec": 0, 00:20:02.675 "reconnect_delay_sec": 0, 00:20:02.675 "fast_io_fail_timeout_sec": 0, 00:20:02.675 "disable_auto_failback": false, 00:20:02.675 "generate_uuids": false, 00:20:02.675 "transport_tos": 0, 00:20:02.675 "nvme_error_stat": false, 00:20:02.675 "rdma_srq_size": 0, 00:20:02.675 "io_path_stat": false, 00:20:02.675 "allow_accel_sequence": false, 00:20:02.675 "rdma_max_cq_size": 0, 00:20:02.675 "rdma_cm_event_timeout_ms": 0, 00:20:02.675 "dhchap_digests": [ 00:20:02.675 "sha256", 00:20:02.675 "sha384", 00:20:02.675 "sha512" 00:20:02.675 ], 00:20:02.675 "dhchap_dhgroups": [ 00:20:02.675 "null", 00:20:02.675 "ffdhe2048", 00:20:02.675 "ffdhe3072", 00:20:02.675 "ffdhe4096", 00:20:02.675 "ffdhe6144", 00:20:02.675 "ffdhe8192" 00:20:02.675 ] 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "bdev_nvme_attach_controller", 00:20:02.675 "params": { 00:20:02.675 "name": "nvme0", 00:20:02.675 "trtype": "TCP", 00:20:02.675 "adrfam": "IPv4", 00:20:02.675 "traddr": "10.0.0.2", 00:20:02.675 "trsvcid": "4420", 00:20:02.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.675 "prchk_reftag": false, 00:20:02.675 "prchk_guard": false, 00:20:02.675 "ctrlr_loss_timeout_sec": 0, 00:20:02.675 "reconnect_delay_sec": 0, 00:20:02.675 "fast_io_fail_timeout_sec": 0, 00:20:02.675 "psk": "key0", 00:20:02.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.675 "hdgst": false, 00:20:02.675 "ddgst": false 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "bdev_nvme_set_hotplug", 00:20:02.675 "params": { 00:20:02.675 "period_us": 100000, 00:20:02.675 "enable": false 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "bdev_enable_histogram", 00:20:02.675 "params": { 00:20:02.675 "name": "nvme0n1", 00:20:02.675 "enable": true 00:20:02.675 } 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "method": "bdev_wait_for_examine" 00:20:02.675 } 00:20:02.675 ] 00:20:02.675 }, 00:20:02.675 { 00:20:02.675 "subsystem": "nbd", 00:20:02.675 "config": [] 00:20:02.675 } 00:20:02.675 ] 00:20:02.675 }' 00:20:02.675 15:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 55896 00:20:02.675 15:56:32 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 55896 ']' 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 55896 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 55896 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55896' 00:20:02.676 killing process with pid 55896 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 55896 00:20:02.676 Received shutdown signal, test time was about 1.000000 seconds 00:20:02.676 00:20:02.676 Latency(us) 00:20:02.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.676 =================================================================================================================== 00:20:02.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.676 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 55896 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 55760 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 55760 ']' 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 55760 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 55760 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55760' 00:20:02.933 killing process with pid 55760 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 55760 00:20:02.933 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 55760 00:20:03.190 15:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:03.190 15:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:03.190 15:56:32 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:03.190 "subsystems": [ 00:20:03.190 { 00:20:03.190 "subsystem": "keyring", 00:20:03.190 "config": [ 00:20:03.190 { 00:20:03.190 "method": "keyring_file_add_key", 00:20:03.190 "params": { 00:20:03.190 "name": "key0", 00:20:03.190 "path": "/tmp/tmp.XCAMLqHnav" 00:20:03.190 } 00:20:03.190 } 00:20:03.190 ] 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "subsystem": "iobuf", 00:20:03.190 "config": [ 00:20:03.190 { 00:20:03.190 "method": "iobuf_set_options", 00:20:03.190 "params": { 00:20:03.190 "small_pool_count": 8192, 00:20:03.190 "large_pool_count": 1024, 00:20:03.190 "small_bufsize": 8192, 00:20:03.190 "large_bufsize": 135168 00:20:03.190 } 00:20:03.190 } 00:20:03.190 ] 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "subsystem": "sock", 00:20:03.190 "config": [ 00:20:03.190 { 00:20:03.190 "method": 
"sock_set_default_impl", 00:20:03.190 "params": { 00:20:03.190 "impl_name": "posix" 00:20:03.190 } 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "method": "sock_impl_set_options", 00:20:03.190 "params": { 00:20:03.190 "impl_name": "ssl", 00:20:03.190 "recv_buf_size": 4096, 00:20:03.190 "send_buf_size": 4096, 00:20:03.190 "enable_recv_pipe": true, 00:20:03.190 "enable_quickack": false, 00:20:03.190 "enable_placement_id": 0, 00:20:03.190 "enable_zerocopy_send_server": true, 00:20:03.190 "enable_zerocopy_send_client": false, 00:20:03.190 "zerocopy_threshold": 0, 00:20:03.190 "tls_version": 0, 00:20:03.190 "enable_ktls": false 00:20:03.190 } 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "method": "sock_impl_set_options", 00:20:03.190 "params": { 00:20:03.190 "impl_name": "posix", 00:20:03.190 "recv_buf_size": 2097152, 00:20:03.190 "send_buf_size": 2097152, 00:20:03.190 "enable_recv_pipe": true, 00:20:03.190 "enable_quickack": false, 00:20:03.190 "enable_placement_id": 0, 00:20:03.190 "enable_zerocopy_send_server": true, 00:20:03.190 "enable_zerocopy_send_client": false, 00:20:03.190 "zerocopy_threshold": 0, 00:20:03.190 "tls_version": 0, 00:20:03.190 "enable_ktls": false 00:20:03.190 } 00:20:03.190 } 00:20:03.190 ] 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "subsystem": "vmd", 00:20:03.190 "config": [] 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "subsystem": "accel", 00:20:03.190 "config": [ 00:20:03.190 { 00:20:03.190 "method": "accel_set_options", 00:20:03.190 "params": { 00:20:03.190 "small_cache_size": 128, 00:20:03.190 "large_cache_size": 16, 00:20:03.190 "task_count": 2048, 00:20:03.190 "sequence_count": 2048, 00:20:03.190 "buf_count": 2048 00:20:03.190 } 00:20:03.190 } 00:20:03.190 ] 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "subsystem": "bdev", 00:20:03.190 "config": [ 00:20:03.190 { 00:20:03.190 "method": "bdev_set_options", 00:20:03.190 "params": { 00:20:03.190 "bdev_io_pool_size": 65535, 00:20:03.191 "bdev_io_cache_size": 256, 00:20:03.191 "bdev_auto_examine": true, 00:20:03.191 "iobuf_small_cache_size": 128, 00:20:03.191 "iobuf_large_cache_size": 16 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "bdev_raid_set_options", 00:20:03.191 "params": { 00:20:03.191 "process_window_size_kb": 1024 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "bdev_iscsi_set_options", 00:20:03.191 "params": { 00:20:03.191 "timeout_sec": 30 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "bdev_nvme_set_options", 00:20:03.191 "params": { 00:20:03.191 "action_on_timeout": "none", 00:20:03.191 "timeout_us": 0, 00:20:03.191 "timeout_admin_us": 0, 00:20:03.191 "keep_alive_timeout_ms": 10000, 00:20:03.191 "arbitration_burst": 0, 00:20:03.191 "low_priority_weight": 0, 00:20:03.191 "medium_priority_weight": 0, 00:20:03.191 "high_priority_weight": 0, 00:20:03.191 "nvme_adminq_poll_period_us": 10000, 00:20:03.191 "nvme_ioq_poll_period_us": 0, 00:20:03.191 "io_queue_requests": 0, 00:20:03.191 "delay_cmd_submit": true, 00:20:03.191 "transport_retry_count": 4, 00:20:03.191 "bdev_retry_count": 3, 00:20:03.191 "transport_ack_timeout": 0, 00:20:03.191 "ctrlr_loss_timeout_sec": 0, 00:20:03.191 "reconnect_delay_sec": 0, 00:20:03.191 "fast_io_fail_timeout_sec": 0, 00:20:03.191 "disable_auto_failback": false, 00:20:03.191 "generate_uuids": false, 00:20:03.191 "transport_tos": 0, 00:20:03.191 "nvme_error_stat": false, 00:20:03.191 "rdma_srq_size": 0, 00:20:03.191 "io_path_stat": false, 00:20:03.191 "allow_accel_sequence": false, 00:20:03.191 "rdma_max_cq_size": 0, 
00:20:03.191 "rdma_cm_event_timeout_ms": 0, 00:20:03.191 "dhchap_digests": [ 00:20:03.191 "sha256", 00:20:03.191 "sha384", 00:20:03.191 "sha512" 00:20:03.191 ], 00:20:03.191 "dhchap_dhgroups": [ 00:20:03.191 "null", 00:20:03.191 "ffdhe2048", 00:20:03.191 "ffdhe3072", 00:20:03.191 "ffdhe4096", 00:20:03.191 "ffdhe6144", 00:20:03.191 "ffdhe8192" 00:20:03.191 ] 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "bdev_nvme_set_hotplug", 00:20:03.191 "params": { 00:20:03.191 "period_us": 100000, 00:20:03.191 "enable": false 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "bdev_malloc_create", 00:20:03.191 "params": { 00:20:03.191 "name": "malloc0", 00:20:03.191 "num_blocks": 8192, 00:20:03.191 "block_size": 4096, 00:20:03.191 "physical_block_size": 4096, 00:20:03.191 "uuid": "87ab2eed-569a-4cba-9c6a-b3b9c6c20b45", 00:20:03.191 "optimal_io_boundary": 0 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "bdev_wait_for_examine" 00:20:03.191 } 00:20:03.191 ] 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "subsystem": "nbd", 00:20:03.191 "config": [] 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "subsystem": "scheduler", 00:20:03.191 "config": [ 00:20:03.191 { 00:20:03.191 "method": "framework_set_scheduler", 00:20:03.191 "params": { 00:20:03.191 "name": "static" 00:20:03.191 } 00:20:03.191 } 00:20:03.191 ] 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "subsystem": "nvmf", 00:20:03.191 "config": [ 00:20:03.191 { 00:20:03.191 "method": "nvmf_set_config", 00:20:03.191 "params": { 00:20:03.191 "discovery_filter": "match_any", 00:20:03.191 "admin_cmd_passthru": { 00:20:03.191 "identify_ctrlr": false 00:20:03.191 } 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "nvmf_set_max_subsystems", 00:20:03.191 "params": { 00:20:03.191 "max_subsystems": 1024 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "nvmf_set_crdt", 00:20:03.191 "params": { 00:20:03.191 "crdt1": 0, 00:20:03.191 "crdt2": 0, 00:20:03.191 "crdt3": 0 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "nvmf_create_transport", 00:20:03.191 "params": { 00:20:03.191 "trtype": "TCP", 00:20:03.191 "max_queue_depth": 128, 00:20:03.191 "max_io_qpairs_per_ctrlr": 127, 00:20:03.191 "in_capsule_data_size": 4096, 00:20:03.191 "max_io_size": 131072, 00:20:03.191 "io_unit_size": 131072, 00:20:03.191 "max_aq_depth": 128, 00:20:03.191 "num_shared_buffers": 511, 00:20:03.191 "buf_cache_size": 4294967295, 00:20:03.191 "dif_insert_or_strip": false, 00:20:03.191 "zcopy": false, 00:20:03.191 "c2h_success": false, 00:20:03.191 "sock_priority": 0, 00:20:03.191 "abort_timeout_sec": 1, 00:20:03.191 "ack_timeout": 0, 00:20:03.191 "data_wr_pool_size": 0 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "nvmf_create_subsystem", 00:20:03.191 "params": { 00:20:03.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.191 "allow_any_host": false, 00:20:03.191 "serial_number": "00000000000000000000", 00:20:03.191 "model_number": "SPDK bdev Controller", 00:20:03.191 "max_namespaces": 32, 00:20:03.191 "min_cntlid": 1, 00:20:03.191 "max_cntlid": 65519, 00:20:03.191 "ana_reporting": false 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "nvmf_subsystem_add_host", 00:20:03.191 "params": { 00:20:03.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.191 "host": "nqn.2016-06.io.spdk:host1", 00:20:03.191 "psk": "key0" 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "nvmf_subsystem_add_ns", 00:20:03.191 "params": { 
00:20:03.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.191 "namespace": { 00:20:03.191 "nsid": 1, 00:20:03.191 "bdev_name": "malloc0", 00:20:03.191 "nguid": "87AB2EED569A4CBA9C6AB3B9C6C20B45", 00:20:03.191 "uuid": "87ab2eed-569a-4cba-9c6a-b3b9c6c20b45", 00:20:03.191 "no_auto_visible": false 00:20:03.191 } 00:20:03.191 } 00:20:03.191 }, 00:20:03.191 { 00:20:03.191 "method": "nvmf_subsystem_add_listener", 00:20:03.191 "params": { 00:20:03.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.191 "listen_address": { 00:20:03.191 "trtype": "TCP", 00:20:03.191 "adrfam": "IPv4", 00:20:03.191 "traddr": "10.0.0.2", 00:20:03.191 "trsvcid": "4420" 00:20:03.191 }, 00:20:03.191 "secure_channel": true 00:20:03.191 } 00:20:03.191 } 00:20:03.191 ] 00:20:03.191 } 00:20:03.191 ] 00:20:03.191 }' 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=56223 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 56223 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 56223 ']' 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:03.191 15:56:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.191 [2024-07-12 15:56:32.879050] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:20:03.191 [2024-07-12 15:56:32.879148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.191 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.456 [2024-07-12 15:56:32.942157] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.456 [2024-07-12 15:56:33.053167] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.456 [2024-07-12 15:56:33.053222] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.456 [2024-07-12 15:56:33.053235] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.456 [2024-07-12 15:56:33.053246] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.456 [2024-07-12 15:56:33.053255] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.456 [2024-07-12 15:56:33.053358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.721 [2024-07-12 15:56:33.292912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.721 [2024-07-12 15:56:33.324943] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.721 [2024-07-12 15:56:33.333517] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=56371 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 56371 /var/tmp/bdevperf.sock 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 56371 ']' 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.285 15:56:33 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:04.285 "subsystems": [ 00:20:04.285 { 00:20:04.285 "subsystem": "keyring", 00:20:04.285 "config": [ 00:20:04.285 { 00:20:04.285 "method": "keyring_file_add_key", 00:20:04.285 "params": { 00:20:04.285 "name": "key0", 00:20:04.285 "path": "/tmp/tmp.XCAMLqHnav" 00:20:04.285 } 00:20:04.285 } 00:20:04.285 ] 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "subsystem": "iobuf", 00:20:04.285 "config": [ 00:20:04.285 { 00:20:04.285 "method": "iobuf_set_options", 00:20:04.285 "params": { 00:20:04.285 "small_pool_count": 8192, 00:20:04.285 "large_pool_count": 1024, 00:20:04.285 "small_bufsize": 8192, 00:20:04.285 "large_bufsize": 135168 00:20:04.285 } 00:20:04.285 } 00:20:04.285 ] 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "subsystem": "sock", 00:20:04.285 "config": [ 00:20:04.285 { 00:20:04.285 "method": "sock_set_default_impl", 00:20:04.285 "params": { 00:20:04.285 "impl_name": "posix" 00:20:04.285 } 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "method": "sock_impl_set_options", 00:20:04.285 "params": { 00:20:04.285 "impl_name": "ssl", 00:20:04.285 "recv_buf_size": 4096, 00:20:04.285 "send_buf_size": 4096, 00:20:04.285 "enable_recv_pipe": true, 00:20:04.285 "enable_quickack": false, 00:20:04.285 "enable_placement_id": 0, 00:20:04.285 "enable_zerocopy_send_server": true, 00:20:04.285 "enable_zerocopy_send_client": false, 00:20:04.285 "zerocopy_threshold": 0, 00:20:04.285 "tls_version": 0, 00:20:04.285 "enable_ktls": false 00:20:04.285 } 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "method": "sock_impl_set_options", 00:20:04.285 "params": { 00:20:04.285 "impl_name": "posix", 00:20:04.285 "recv_buf_size": 2097152, 00:20:04.285 "send_buf_size": 2097152, 00:20:04.285 "enable_recv_pipe": true, 00:20:04.285 "enable_quickack": false, 00:20:04.285 "enable_placement_id": 0, 00:20:04.285 "enable_zerocopy_send_server": true, 00:20:04.285 "enable_zerocopy_send_client": false, 00:20:04.285 "zerocopy_threshold": 0, 00:20:04.285 "tls_version": 0, 00:20:04.285 "enable_ktls": false 00:20:04.285 } 00:20:04.285 } 00:20:04.285 ] 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "subsystem": "vmd", 00:20:04.285 "config": [] 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "subsystem": "accel", 00:20:04.285 "config": [ 00:20:04.285 { 00:20:04.285 "method": "accel_set_options", 00:20:04.285 "params": { 00:20:04.285 "small_cache_size": 128, 00:20:04.285 "large_cache_size": 16, 00:20:04.285 "task_count": 2048, 00:20:04.285 "sequence_count": 2048, 00:20:04.285 "buf_count": 2048 00:20:04.285 } 00:20:04.285 } 00:20:04.285 ] 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "subsystem": "bdev", 00:20:04.285 "config": [ 00:20:04.285 { 00:20:04.285 "method": "bdev_set_options", 00:20:04.285 "params": { 00:20:04.285 "bdev_io_pool_size": 65535, 00:20:04.285 "bdev_io_cache_size": 256, 00:20:04.285 "bdev_auto_examine": true, 00:20:04.285 "iobuf_small_cache_size": 128, 00:20:04.285 "iobuf_large_cache_size": 16 00:20:04.285 } 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "method": "bdev_raid_set_options", 00:20:04.285 "params": { 00:20:04.285 "process_window_size_kb": 1024 00:20:04.285 } 
00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "method": "bdev_iscsi_set_options", 00:20:04.285 "params": { 00:20:04.285 "timeout_sec": 30 00:20:04.285 } 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "method": "bdev_nvme_set_options", 00:20:04.285 "params": { 00:20:04.285 "action_on_timeout": "none", 00:20:04.285 "timeout_us": 0, 00:20:04.285 "timeout_admin_us": 0, 00:20:04.285 "keep_alive_timeout_ms": 10000, 00:20:04.285 "arbitration_burst": 0, 00:20:04.285 "low_priority_weight": 0, 00:20:04.285 "medium_priority_weight": 0, 00:20:04.285 "high_priority_weight": 0, 00:20:04.285 "nvme_adminq_poll_period_us": 10000, 00:20:04.285 "nvme_ioq_poll_period_us": 0, 00:20:04.285 "io_queue_requests": 512, 00:20:04.285 "delay_cmd_submit": true, 00:20:04.285 "transport_retry_count": 4, 00:20:04.285 "bdev_retry_count": 3, 00:20:04.285 "transport_ack_timeout": 0, 00:20:04.285 "ctrlr_loss_timeout_sec": 0, 00:20:04.285 "reconnect_delay_sec": 0, 00:20:04.285 "fast_io_fail_timeout_sec": 0, 00:20:04.285 "disable_auto_failback": false, 00:20:04.285 "generate_uuids": false, 00:20:04.285 "transport_tos": 0, 00:20:04.285 "nvme_error_stat": false, 00:20:04.285 "rdma_srq_size": 0, 00:20:04.286 "io_path_stat": false, 00:20:04.286 "allow_accel_sequence": false, 00:20:04.286 "rdma_max_cq_size": 0, 00:20:04.286 "rdma_cm_event_timeout_ms": 0, 00:20:04.286 "dhchap_digests": [ 00:20:04.286 "sha256", 00:20:04.286 "sha384", 00:20:04.286 "sha512" 00:20:04.286 ], 00:20:04.286 "dhchap_dhgroups": [ 00:20:04.286 "null", 00:20:04.286 "ffdhe2048", 00:20:04.286 "ffdhe3072", 00:20:04.286 "ffdhe4096", 00:20:04.286 "ffdhe6144", 00:20:04.286 "ffdhe8192" 00:20:04.286 ] 00:20:04.286 } 00:20:04.286 }, 00:20:04.286 { 00:20:04.286 "method": "bdev_nvme_attach_controller", 00:20:04.286 "params": { 00:20:04.286 "name": "nvme0", 00:20:04.286 "trtype": "TCP", 00:20:04.286 "adrfam": "IPv4", 00:20:04.286 "traddr": "10.0.0.2", 00:20:04.286 "trsvcid": "4420", 00:20:04.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.286 "prchk_reftag": false, 00:20:04.286 "prchk_guard": false, 00:20:04.286 "ctrlr_loss_timeout_sec": 0, 00:20:04.286 "reconnect_delay_sec": 0, 00:20:04.286 "fast_io_fail_timeout_sec": 0, 00:20:04.286 "psk": "key0", 00:20:04.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.286 "hdgst": false, 00:20:04.286 "ddgst": false 00:20:04.286 } 00:20:04.286 }, 00:20:04.286 { 00:20:04.286 "method": "bdev_nvme_set_hotplug", 00:20:04.286 "params": { 00:20:04.286 "period_us": 100000, 00:20:04.286 "enable": false 00:20:04.286 } 00:20:04.286 }, 00:20:04.286 { 00:20:04.286 "method": "bdev_enable_histogram", 00:20:04.286 "params": { 00:20:04.286 "name": "nvme0n1", 00:20:04.286 "enable": true 00:20:04.286 } 00:20:04.286 }, 00:20:04.286 { 00:20:04.286 "method": "bdev_wait_for_examine" 00:20:04.286 } 00:20:04.286 ] 00:20:04.286 }, 00:20:04.286 { 00:20:04.286 "subsystem": "nbd", 00:20:04.286 "config": [] 00:20:04.286 } 00:20:04.286 ] 00:20:04.286 }' 00:20:04.286 [2024-07-12 15:56:33.950185] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:20:04.286 [2024-07-12 15:56:33.950277] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56371 ] 00:20:04.286 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.286 [2024-07-12 15:56:34.009189] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.543 [2024-07-12 15:56:34.124587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.800 [2024-07-12 15:56:34.302013] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.364 15:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.364 15:56:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:05.364 15:56:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.364 15:56:34 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:05.622 15:56:35 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.622 15:56:35 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:05.622 Running I/O for 1 seconds... 00:20:06.994 00:20:06.994 Latency(us) 00:20:06.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.994 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:06.994 Verification LBA range: start 0x0 length 0x2000 00:20:06.994 nvme0n1 : 1.04 2662.47 10.40 0.00 0.00 47149.18 6456.51 69128.34 00:20:06.994 =================================================================================================================== 00:20:06.994 Total : 2662.47 10.40 0.00 0.00 47149.18 6456.51 69128.34 00:20:06.994 0 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:06.994 nvmf_trace.0 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 56371 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 56371 ']' 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 56371 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 56371 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56371' 00:20:06.994 killing process with pid 56371 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 56371 00:20:06.994 Received shutdown signal, test time was about 1.000000 seconds 00:20:06.994 00:20:06.994 Latency(us) 00:20:06.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.994 =================================================================================================================== 00:20:06.994 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 56371 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.994 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:07.252 rmmod nvme_tcp 00:20:07.252 rmmod nvme_fabrics 00:20:07.252 rmmod nvme_keyring 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 56223 ']' 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 56223 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 56223 ']' 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 56223 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 56223 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56223' 00:20:07.252 killing process with pid 56223 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 56223 00:20:07.252 15:56:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 56223 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.510 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.412 15:56:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.412 15:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.G6NgJAOINq /tmp/tmp.6ifK2kMWNP /tmp/tmp.XCAMLqHnav 00:20:09.412 00:20:09.412 real 1m20.504s 00:20:09.412 user 2m0.015s 00:20:09.412 sys 0m28.765s 00:20:09.412 15:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.412 15:56:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.412 ************************************ 00:20:09.412 END TEST nvmf_tls 00:20:09.412 ************************************ 00:20:09.671 15:56:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:09.672 15:56:39 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:09.672 15:56:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:09.672 15:56:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.672 15:56:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:09.672 ************************************ 00:20:09.672 START TEST nvmf_fips 00:20:09.672 ************************************ 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:09.672 * Looking for test storage... 
00:20:09.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.672 15:56:39 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:09.672 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:09.673 Error setting digest 00:20:09.673 0092D6B1597F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:09.673 0092D6B1597F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.673 15:56:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.261 
15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:12.261 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:12.261 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.261 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:12.262 Found net devices under 0000:09:00.0: cvl_0_0 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:12.262 Found net devices under 0000:09:00.1: cvl_0_1 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:20:12.262 00:20:12.262 --- 10.0.0.2 ping statistics --- 00:20:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.262 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:20:12.262 00:20:12.262 --- 10.0.0.1 ping statistics --- 00:20:12.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.262 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=58730 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 58730 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 58730 ']' 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.262 15:56:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.262 [2024-07-12 15:56:41.704381] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:20:12.262 [2024-07-12 15:56:41.704466] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.262 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.262 [2024-07-12 15:56:41.767754] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.262 [2024-07-12 15:56:41.879292] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.262 [2024-07-12 15:56:41.879357] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:12.262 [2024-07-12 15:56:41.879373] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.262 [2024-07-12 15:56:41.879385] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.262 [2024-07-12 15:56:41.879395] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.262 [2024-07-12 15:56:41.879422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:13.195 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.195 [2024-07-12 15:56:42.866361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.195 [2024-07-12 15:56:42.882332] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:13.195 [2024-07-12 15:56:42.882524] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.195 [2024-07-12 15:56:42.912351] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:13.195 malloc0 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=58891 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 58891 /var/tmp/bdevperf.sock 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 58891 ']' 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:13.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.452 15:56:42 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.452 [2024-07-12 15:56:42.997524] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:20:13.452 [2024-07-12 15:56:42.997609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:20:13.452 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.452 [2024-07-12 15:56:43.054591] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.452 [2024-07-12 15:56:43.163855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.383 15:56:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.383 15:56:43 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:14.383 15:56:43 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:14.640 [2024-07-12 15:56:44.222234] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.640 [2024-07-12 15:56:44.222384] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:14.640 TLSTESTn1 00:20:14.640 15:56:44 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.897 Running I/O for 10 seconds... 
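The TLS attach exercised above reduces to a short PSK handoff between the key file and the bdevperf RPC socket. A condensed, stand-alone sketch of that wiring, using the key, NQNs and address taken verbatim from this run (the /tmp path and the trimmed rpc.py path are illustrative, not the harness's own layout), is roughly:

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/tmp/key.txt                  # fips.sh writes .../test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"                 # the test locks the PSK file down before use
# initiator side: attach the bdevperf controller over TLS with the same PSK file
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$key_path"

The deprecation warnings in the surrounding output are expected for this build: it still routes the PSK through the file-path options that SPDK flags for removal in v24.09.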
00:20:24.851 00:20:24.851 Latency(us) 00:20:24.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.851 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:24.851 Verification LBA range: start 0x0 length 0x2000 00:20:24.851 TLSTESTn1 : 10.04 2725.04 10.64 0.00 0.00 46851.26 6165.24 89711.50 00:20:24.851 =================================================================================================================== 00:20:24.851 Total : 2725.04 10.64 0.00 0.00 46851.26 6165.24 89711.50 00:20:24.851 0 00:20:24.851 15:56:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:24.851 15:56:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:24.851 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:24.851 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:24.851 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:24.851 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:24.851 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:24.852 nvmf_trace.0 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 58891 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 58891 ']' 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 58891 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:24.852 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58891 00:20:25.109 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:25.109 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:25.109 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58891' 00:20:25.109 killing process with pid 58891 00:20:25.109 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 58891 00:20:25.109 Received shutdown signal, test time was about 10.000000 seconds 00:20:25.109 00:20:25.109 Latency(us) 00:20:25.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.109 =================================================================================================================== 00:20:25.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.109 [2024-07-12 15:56:54.593773] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:25.109 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 58891 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.367 rmmod nvme_tcp 00:20:25.367 rmmod nvme_fabrics 00:20:25.367 rmmod nvme_keyring 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 58730 ']' 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 58730 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 58730 ']' 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 58730 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58730 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58730' 00:20:25.367 killing process with pid 58730 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 58730 00:20:25.367 [2024-07-12 15:56:54.933517] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:25.367 15:56:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 58730 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.627 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.533 15:56:57 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.533 15:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:27.533 00:20:27.533 real 0m18.056s 00:20:27.533 user 0m23.081s 00:20:27.533 sys 0m6.538s 00:20:27.533 15:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.533 15:56:57 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:27.533 ************************************ 00:20:27.533 END TEST nvmf_fips 00:20:27.533 
************************************ 00:20:27.533 15:56:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:27.533 15:56:57 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:27.533 15:56:57 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:27.533 15:56:57 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:27.533 15:56:57 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:27.533 15:56:57 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.533 15:56:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.066 15:56:59 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.066 15:56:59 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.066 15:56:59 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.066 15:56:59 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.066 15:56:59 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.066 15:56:59 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.066 15:56:59 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:30.067 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:30.067 15:56:59 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:30.067 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:30.067 Found net devices under 0000:09:00.0: cvl_0_0 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:30.067 Found net devices under 0000:09:00.1: cvl_0_1 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:30.067 15:56:59 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:30.067 15:56:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:30.067 15:56:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:20:30.067 15:56:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.067 ************************************ 00:20:30.067 START TEST nvmf_perf_adq 00:20:30.067 ************************************ 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:30.067 * Looking for test storage... 00:20:30.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.067 15:56:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:31.979 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:31.979 Found 0000:09:00.1 (0x8086 - 0x159b) 
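The repeated "Found 0000:09:00.x" lines come from the helper walking the PCI bus for supported NVMe-oF NICs; in this run both hits are E810 ports (device 0x159b under Intel vendor 0x8086). A rough stand-alone equivalent of that walk, reading sysfs directly instead of the script's pci_bus_cache, would be:

for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
    for net in "$pci"/net/*; do          # the kernel exposes bound netdevs here
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done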
00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:31.979 Found net devices under 0000:09:00.0: cvl_0_0 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:31.979 Found net devices under 0000:09:00.1: cvl_0_1 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:31.979 15:57:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:32.545 15:57:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:35.075 15:57:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:40.374 15:57:09 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:40.375 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:40.375 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:40.375 Found net devices under 0000:09:00.0: cvl_0_0 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:40.375 Found net devices under 0000:09:00.1: cvl_0_1 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.375 15:57:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:40.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:20:40.375 00:20:40.375 --- 10.0.0.2 ping statistics --- 00:20:40.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.375 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:20:40.375 00:20:40.375 --- 10.0.0.1 ping statistics --- 00:20:40.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.375 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=65405 00:20:40.375 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 65405 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 65405 ']' 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 [2024-07-12 15:57:09.420495] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
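The loopback topology that both pings above exercise is built from the two E810 ports found earlier: one port (cvl_0_1) stays in the root namespace as the initiator, the other (cvl_0_0) is moved into a private namespace for the target. A condensed sketch of that plumbing, using the interface names, addresses and port from this run, is roughly:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator port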
00:20:40.376 [2024-07-12 15:57:09.420591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.376 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.376 [2024-07-12 15:57:09.485176] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.376 [2024-07-12 15:57:09.597870] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.376 [2024-07-12 15:57:09.597937] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.376 [2024-07-12 15:57:09.597966] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.376 [2024-07-12 15:57:09.597977] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.376 [2024-07-12 15:57:09.597986] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.376 [2024-07-12 15:57:09.598071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.376 [2024-07-12 15:57:09.598144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.376 [2024-07-12 15:57:09.598213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.376 [2024-07-12 15:57:09.598216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 [2024-07-12 15:57:09.814549] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 Malloc1 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.376 [2024-07-12 15:57:09.866266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=65554 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:40.376 15:57:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:40.376 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:42.275 
"tick_rate": 2700000000, 00:20:42.275 "poll_groups": [ 00:20:42.275 { 00:20:42.275 "name": "nvmf_tgt_poll_group_000", 00:20:42.275 "admin_qpairs": 1, 00:20:42.275 "io_qpairs": 1, 00:20:42.275 "current_admin_qpairs": 1, 00:20:42.275 "current_io_qpairs": 1, 00:20:42.275 "pending_bdev_io": 0, 00:20:42.275 "completed_nvme_io": 19838, 00:20:42.275 "transports": [ 00:20:42.275 { 00:20:42.275 "trtype": "TCP" 00:20:42.275 } 00:20:42.275 ] 00:20:42.275 }, 00:20:42.275 { 00:20:42.275 "name": "nvmf_tgt_poll_group_001", 00:20:42.275 "admin_qpairs": 0, 00:20:42.275 "io_qpairs": 1, 00:20:42.275 "current_admin_qpairs": 0, 00:20:42.275 "current_io_qpairs": 1, 00:20:42.275 "pending_bdev_io": 0, 00:20:42.275 "completed_nvme_io": 20195, 00:20:42.275 "transports": [ 00:20:42.275 { 00:20:42.275 "trtype": "TCP" 00:20:42.275 } 00:20:42.275 ] 00:20:42.275 }, 00:20:42.275 { 00:20:42.275 "name": "nvmf_tgt_poll_group_002", 00:20:42.275 "admin_qpairs": 0, 00:20:42.275 "io_qpairs": 1, 00:20:42.275 "current_admin_qpairs": 0, 00:20:42.275 "current_io_qpairs": 1, 00:20:42.275 "pending_bdev_io": 0, 00:20:42.275 "completed_nvme_io": 20574, 00:20:42.275 "transports": [ 00:20:42.275 { 00:20:42.275 "trtype": "TCP" 00:20:42.275 } 00:20:42.275 ] 00:20:42.275 }, 00:20:42.275 { 00:20:42.275 "name": "nvmf_tgt_poll_group_003", 00:20:42.275 "admin_qpairs": 0, 00:20:42.275 "io_qpairs": 1, 00:20:42.275 "current_admin_qpairs": 0, 00:20:42.275 "current_io_qpairs": 1, 00:20:42.275 "pending_bdev_io": 0, 00:20:42.275 "completed_nvme_io": 20552, 00:20:42.275 "transports": [ 00:20:42.275 { 00:20:42.275 "trtype": "TCP" 00:20:42.275 } 00:20:42.275 ] 00:20:42.275 } 00:20:42.275 ] 00:20:42.275 }' 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:42.275 15:57:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 65554 00:20:50.377 Initializing NVMe Controllers 00:20:50.377 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:50.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:50.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:50.377 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:50.377 Initialization complete. Launching workers. 
00:20:50.377 ======================================================== 00:20:50.378 Latency(us) 00:20:50.378 Device Information : IOPS MiB/s Average min max 00:20:50.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10727.10 41.90 5966.00 2321.12 8342.54 00:20:50.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10587.30 41.36 6046.72 3158.29 7417.87 00:20:50.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10816.30 42.25 5918.78 2340.20 8459.68 00:20:50.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10433.90 40.76 6136.15 1899.21 10313.14 00:20:50.378 ======================================================== 00:20:50.378 Total : 42564.60 166.27 6015.79 1899.21 10313.14 00:20:50.378 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.378 rmmod nvme_tcp 00:20:50.378 rmmod nvme_fabrics 00:20:50.378 rmmod nvme_keyring 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 65405 ']' 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 65405 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 65405 ']' 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 65405 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65405 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65405' 00:20:50.378 killing process with pid 65405 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 65405 00:20:50.378 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 65405 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.942 15:57:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.841 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:52.841 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:52.841 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:53.406 15:57:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:55.304 15:57:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.583 15:57:30 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:00.583 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:00.584 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:00.584 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
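In the discovery trace above, gather_supported_nvmf_pci_devs matches the two E810 ports by PCI vendor/device ID (0x8086 - 0x159b) and then resolves each PCI address to its kernel netdev through sysfs, which is where the cvl_0_0/cvl_0_1 names come from. A minimal standalone sketch of that sysfs lookup (bash; the PCI address below is the one from this run and is only illustrative):

    # resolve a PCI address to the netdev name(s) the kernel bound to it
    pci=0000:09:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep only the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"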
00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:00.584 Found net devices under 0000:09:00.0: cvl_0_0 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:00.584 Found net devices under 0000:09:00.1: cvl_0_1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.584 
15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:00.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:21:00.584 00:21:00.584 --- 10.0.0.2 ping statistics --- 00:21:00.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.584 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:21:00.584 00:21:00.584 --- 10.0.0.1 ping statistics --- 00:21:00.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.584 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:00.584 net.core.busy_poll = 1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:00.584 net.core.busy_read = 1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:00.584 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=68121 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 68121 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 68121 ']' 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.876 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.876 [2024-07-12 15:57:30.375597] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:00.876 [2024-07-12 15:57:30.375699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.876 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.876 [2024-07-12 15:57:30.440243] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.876 [2024-07-12 15:57:30.552411] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.876 [2024-07-12 15:57:30.552464] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.876 [2024-07-12 15:57:30.552494] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.876 [2024-07-12 15:57:30.552506] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.876 [2024-07-12 15:57:30.552516] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
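adq_configure_driver, traced above, performs the host-side ADQ setup on the target port: hardware TC offload is enabled, socket busy polling is turned on, an mqprio qdisc splits the port into two traffic classes, and a hardware flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the dedicated class. A condensed sketch of the same sequence (bash; shown without the ip netns exec cvl_0_0_ns_spdk prefix the test uses, with the interface name and addresses taken from this particular run):

    # enable hardware traffic-class offload on the ADQ-capable E810 port
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    # enable socket busy polling (values are microsecond budgets)
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    # two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # steer NVMe/TCP traffic (dst 10.0.0.2, TCP port 4420) into TC1 entirely in hardware
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper run next in the trace then aligns transmit-queue selection with the receive queues (XPS), so that each connection stays on one hardware queue pair.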
00:21:00.876 [2024-07-12 15:57:30.552568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.876 [2024-07-12 15:57:30.552627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.876 [2024-07-12 15:57:30.552672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.876 [2024-07-12 15:57:30.552675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.133 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.133 [2024-07-12 15:57:30.768344] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.134 Malloc1 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.134 15:57:30 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.134 [2024-07-12 15:57:30.822057] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=68210 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:01.134 15:57:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:01.134 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:03.659 "tick_rate": 2700000000, 00:21:03.659 "poll_groups": [ 00:21:03.659 { 00:21:03.659 "name": "nvmf_tgt_poll_group_000", 00:21:03.659 "admin_qpairs": 1, 00:21:03.659 "io_qpairs": 4, 00:21:03.659 "current_admin_qpairs": 1, 00:21:03.659 "current_io_qpairs": 4, 00:21:03.659 "pending_bdev_io": 0, 00:21:03.659 "completed_nvme_io": 34888, 00:21:03.659 "transports": [ 00:21:03.659 { 00:21:03.659 "trtype": "TCP" 00:21:03.659 } 00:21:03.659 ] 00:21:03.659 }, 00:21:03.659 { 00:21:03.659 "name": "nvmf_tgt_poll_group_001", 00:21:03.659 "admin_qpairs": 0, 00:21:03.659 "io_qpairs": 0, 00:21:03.659 "current_admin_qpairs": 0, 00:21:03.659 "current_io_qpairs": 0, 00:21:03.659 "pending_bdev_io": 0, 00:21:03.659 "completed_nvme_io": 0, 00:21:03.659 "transports": [ 00:21:03.659 { 00:21:03.659 "trtype": "TCP" 00:21:03.659 } 00:21:03.659 ] 00:21:03.659 }, 00:21:03.659 { 00:21:03.659 "name": "nvmf_tgt_poll_group_002", 00:21:03.659 "admin_qpairs": 0, 00:21:03.659 "io_qpairs": 0, 00:21:03.659 "current_admin_qpairs": 0, 00:21:03.659 "current_io_qpairs": 0, 00:21:03.659 "pending_bdev_io": 0, 00:21:03.659 "completed_nvme_io": 0, 00:21:03.659 
"transports": [ 00:21:03.659 { 00:21:03.659 "trtype": "TCP" 00:21:03.659 } 00:21:03.659 ] 00:21:03.659 }, 00:21:03.659 { 00:21:03.659 "name": "nvmf_tgt_poll_group_003", 00:21:03.659 "admin_qpairs": 0, 00:21:03.659 "io_qpairs": 0, 00:21:03.659 "current_admin_qpairs": 0, 00:21:03.659 "current_io_qpairs": 0, 00:21:03.659 "pending_bdev_io": 0, 00:21:03.659 "completed_nvme_io": 0, 00:21:03.659 "transports": [ 00:21:03.659 { 00:21:03.659 "trtype": "TCP" 00:21:03.659 } 00:21:03.659 ] 00:21:03.659 } 00:21:03.659 ] 00:21:03.659 }' 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:21:03.659 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 68210 00:21:11.757 Initializing NVMe Controllers 00:21:11.757 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:11.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:11.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:11.757 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:11.757 Initialization complete. Launching workers. 00:21:11.757 ======================================================== 00:21:11.757 Latency(us) 00:21:11.757 Device Information : IOPS MiB/s Average min max 00:21:11.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4375.90 17.09 14627.06 2134.07 62710.24 00:21:11.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4761.90 18.60 13447.22 2033.33 60395.43 00:21:11.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4389.70 17.15 14588.15 2284.34 61277.03 00:21:11.757 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4498.60 17.57 14237.57 2038.60 59616.26 00:21:11.757 ======================================================== 00:21:11.757 Total : 18026.10 70.41 14208.71 2033.33 62710.24 00:21:11.757 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.757 rmmod nvme_tcp 00:21:11.757 rmmod nvme_fabrics 00:21:11.757 rmmod nvme_keyring 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 68121 ']' 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 68121 
00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 68121 ']' 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 68121 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.757 15:57:40 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68121 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68121' 00:21:11.757 killing process with pid 68121 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 68121 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 68121 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.757 15:57:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.656 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.656 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:13.656 00:21:13.656 real 0m43.993s 00:21:13.656 user 2m30.556s 00:21:13.656 sys 0m12.563s 00:21:13.656 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.656 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:13.656 ************************************ 00:21:13.656 END TEST nvmf_perf_adq 00:21:13.656 ************************************ 00:21:13.913 15:57:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:13.913 15:57:43 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:13.913 15:57:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:13.913 15:57:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.913 15:57:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:13.913 ************************************ 00:21:13.913 START TEST nvmf_shutdown 00:21:13.913 ************************************ 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:13.913 * Looking for test storage... 
00:21:13.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.913 15:57:43 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:13.914 ************************************ 00:21:13.914 START TEST nvmf_shutdown_tc1 00:21:13.914 ************************************ 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:13.914 15:57:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.914 15:57:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:16.443 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:16.443 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:16.443 15:57:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:16.443 Found net devices under 0000:09:00.0: cvl_0_0 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:16.443 Found net devices under 0000:09:00.1: cvl_0_1 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:16.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:21:16.443 00:21:16.443 --- 10.0.0.2 ping statistics --- 00:21:16.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.443 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:16.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:16.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:16.443 00:21:16.443 --- 10.0.0.1 ping statistics --- 00:21:16.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.443 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.443 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=71367 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 71367 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 71367 ']' 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.444 15:57:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.444 [2024-07-12 15:57:45.778028] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:21:16.444 [2024-07-12 15:57:45.778104] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.444 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.444 [2024-07-12 15:57:45.842646] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.444 [2024-07-12 15:57:45.956353] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.444 [2024-07-12 15:57:45.956419] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.444 [2024-07-12 15:57:45.956433] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.444 [2024-07-12 15:57:45.956444] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.444 [2024-07-12 15:57:45.956454] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:16.444 [2024-07-12 15:57:45.956547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.444 [2024-07-12 15:57:45.956612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.444 [2024-07-12 15:57:45.956680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:16.444 [2024-07-12 15:57:45.956683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.444 [2024-07-12 15:57:46.114129] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:16.444 15:57:46 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.444 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.702 Malloc1 00:21:16.702 [2024-07-12 15:57:46.204412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.702 Malloc2 00:21:16.702 Malloc3 00:21:16.702 Malloc4 00:21:16.702 Malloc5 00:21:16.702 Malloc6 00:21:16.960 Malloc7 00:21:16.960 Malloc8 00:21:16.960 Malloc9 00:21:16.960 Malloc10 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=71541 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 71541 
/var/tmp/bdevperf.sock 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 71541 ']' 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.960 { 00:21:16.960 "params": { 00:21:16.960 "name": "Nvme$subsystem", 00:21:16.960 "trtype": "$TEST_TRANSPORT", 00:21:16.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.960 "adrfam": "ipv4", 00:21:16.960 "trsvcid": "$NVMF_PORT", 00:21:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.960 "hdgst": ${hdgst:-false}, 00:21:16.960 "ddgst": ${ddgst:-false} 00:21:16.960 }, 00:21:16.960 "method": "bdev_nvme_attach_controller" 00:21:16.960 } 00:21:16.960 EOF 00:21:16.960 )") 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.960 { 00:21:16.960 "params": { 00:21:16.960 "name": "Nvme$subsystem", 00:21:16.960 "trtype": "$TEST_TRANSPORT", 00:21:16.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.960 "adrfam": "ipv4", 00:21:16.960 "trsvcid": "$NVMF_PORT", 00:21:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.960 "hdgst": ${hdgst:-false}, 00:21:16.960 "ddgst": ${ddgst:-false} 00:21:16.960 }, 00:21:16.960 "method": "bdev_nvme_attach_controller" 00:21:16.960 } 00:21:16.960 EOF 00:21:16.960 )") 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.960 { 00:21:16.960 "params": { 00:21:16.960 
"name": "Nvme$subsystem", 00:21:16.960 "trtype": "$TEST_TRANSPORT", 00:21:16.960 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.960 "adrfam": "ipv4", 00:21:16.960 "trsvcid": "$NVMF_PORT", 00:21:16.960 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.960 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.960 "hdgst": ${hdgst:-false}, 00:21:16.960 "ddgst": ${ddgst:-false} 00:21:16.960 }, 00:21:16.960 "method": "bdev_nvme_attach_controller" 00:21:16.960 } 00:21:16.960 EOF 00:21:16.960 )") 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.960 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.960 { 00:21:16.960 "params": { 00:21:16.960 "name": "Nvme$subsystem", 00:21:16.961 "trtype": "$TEST_TRANSPORT", 00:21:16.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.961 "adrfam": "ipv4", 00:21:16.961 "trsvcid": "$NVMF_PORT", 00:21:16.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.961 "hdgst": ${hdgst:-false}, 00:21:16.961 "ddgst": ${ddgst:-false} 00:21:16.961 }, 00:21:16.961 "method": "bdev_nvme_attach_controller" 00:21:16.961 } 00:21:16.961 EOF 00:21:16.961 )") 00:21:16.961 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:16.961 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:16.961 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:16.961 { 00:21:16.961 "params": { 00:21:16.961 "name": "Nvme$subsystem", 00:21:16.961 "trtype": "$TEST_TRANSPORT", 00:21:16.961 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.961 "adrfam": "ipv4", 00:21:16.961 "trsvcid": "$NVMF_PORT", 00:21:16.961 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.961 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.961 "hdgst": ${hdgst:-false}, 00:21:16.961 "ddgst": ${ddgst:-false} 00:21:16.961 }, 00:21:16.961 "method": "bdev_nvme_attach_controller" 00:21:16.961 } 00:21:16.961 EOF 00:21:16.961 )") 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:17.218 { 00:21:17.218 "params": { 00:21:17.218 "name": "Nvme$subsystem", 00:21:17.218 "trtype": "$TEST_TRANSPORT", 00:21:17.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.218 "adrfam": "ipv4", 00:21:17.218 "trsvcid": "$NVMF_PORT", 00:21:17.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.218 "hdgst": ${hdgst:-false}, 00:21:17.218 "ddgst": ${ddgst:-false} 00:21:17.218 }, 00:21:17.218 "method": "bdev_nvme_attach_controller" 00:21:17.218 } 00:21:17.218 EOF 00:21:17.218 )") 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:17.218 { 00:21:17.218 "params": { 00:21:17.218 "name": "Nvme$subsystem", 
00:21:17.218 "trtype": "$TEST_TRANSPORT", 00:21:17.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.218 "adrfam": "ipv4", 00:21:17.218 "trsvcid": "$NVMF_PORT", 00:21:17.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.218 "hdgst": ${hdgst:-false}, 00:21:17.218 "ddgst": ${ddgst:-false} 00:21:17.218 }, 00:21:17.218 "method": "bdev_nvme_attach_controller" 00:21:17.218 } 00:21:17.218 EOF 00:21:17.218 )") 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:17.218 { 00:21:17.218 "params": { 00:21:17.218 "name": "Nvme$subsystem", 00:21:17.218 "trtype": "$TEST_TRANSPORT", 00:21:17.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.218 "adrfam": "ipv4", 00:21:17.218 "trsvcid": "$NVMF_PORT", 00:21:17.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.218 "hdgst": ${hdgst:-false}, 00:21:17.218 "ddgst": ${ddgst:-false} 00:21:17.218 }, 00:21:17.218 "method": "bdev_nvme_attach_controller" 00:21:17.218 } 00:21:17.218 EOF 00:21:17.218 )") 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:17.218 { 00:21:17.218 "params": { 00:21:17.218 "name": "Nvme$subsystem", 00:21:17.218 "trtype": "$TEST_TRANSPORT", 00:21:17.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.218 "adrfam": "ipv4", 00:21:17.218 "trsvcid": "$NVMF_PORT", 00:21:17.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.218 "hdgst": ${hdgst:-false}, 00:21:17.218 "ddgst": ${ddgst:-false} 00:21:17.218 }, 00:21:17.218 "method": "bdev_nvme_attach_controller" 00:21:17.218 } 00:21:17.218 EOF 00:21:17.218 )") 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:17.218 { 00:21:17.218 "params": { 00:21:17.218 "name": "Nvme$subsystem", 00:21:17.218 "trtype": "$TEST_TRANSPORT", 00:21:17.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.218 "adrfam": "ipv4", 00:21:17.218 "trsvcid": "$NVMF_PORT", 00:21:17.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.218 "hdgst": ${hdgst:-false}, 00:21:17.218 "ddgst": ${ddgst:-false} 00:21:17.218 }, 00:21:17.218 "method": "bdev_nvme_attach_controller" 00:21:17.218 } 00:21:17.218 EOF 00:21:17.218 )") 00:21:17.218 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:17.219 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:17.219 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:17.219 15:57:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme1", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme2", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme3", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme4", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme5", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme6", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme7", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme8", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:17.219 "hdgst": false, 
00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme9", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 },{ 00:21:17.219 "params": { 00:21:17.219 "name": "Nvme10", 00:21:17.219 "trtype": "tcp", 00:21:17.219 "traddr": "10.0.0.2", 00:21:17.219 "adrfam": "ipv4", 00:21:17.219 "trsvcid": "4420", 00:21:17.219 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:17.219 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:17.219 "hdgst": false, 00:21:17.219 "ddgst": false 00:21:17.219 }, 00:21:17.219 "method": "bdev_nvme_attach_controller" 00:21:17.219 }' 00:21:17.219 [2024-07-12 15:57:46.717242] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:17.219 [2024-07-12 15:57:46.717349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:17.219 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.219 [2024-07-12 15:57:46.780276] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.219 [2024-07-12 15:57:46.891741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 71541 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:19.116 15:57:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:20.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 71541 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 71367 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.049 { 00:21:20.049 "params": { 00:21:20.049 "name": "Nvme$subsystem", 00:21:20.049 "trtype": "$TEST_TRANSPORT", 00:21:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.049 "adrfam": "ipv4", 00:21:20.049 "trsvcid": "$NVMF_PORT", 00:21:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.049 "hdgst": ${hdgst:-false}, 00:21:20.049 "ddgst": ${ddgst:-false} 00:21:20.049 }, 00:21:20.049 "method": "bdev_nvme_attach_controller" 00:21:20.049 } 00:21:20.049 EOF 00:21:20.049 )") 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.049 { 00:21:20.049 "params": { 00:21:20.049 "name": "Nvme$subsystem", 00:21:20.049 "trtype": "$TEST_TRANSPORT", 00:21:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.049 "adrfam": "ipv4", 00:21:20.049 "trsvcid": "$NVMF_PORT", 00:21:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.049 "hdgst": ${hdgst:-false}, 00:21:20.049 "ddgst": ${ddgst:-false} 00:21:20.049 }, 00:21:20.049 "method": "bdev_nvme_attach_controller" 00:21:20.049 } 00:21:20.049 EOF 00:21:20.049 )") 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.049 { 00:21:20.049 "params": { 00:21:20.049 "name": "Nvme$subsystem", 00:21:20.049 "trtype": "$TEST_TRANSPORT", 00:21:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.049 "adrfam": "ipv4", 00:21:20.049 "trsvcid": "$NVMF_PORT", 00:21:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.049 "hdgst": ${hdgst:-false}, 00:21:20.049 "ddgst": ${ddgst:-false} 00:21:20.049 }, 00:21:20.049 "method": "bdev_nvme_attach_controller" 00:21:20.049 } 00:21:20.049 EOF 00:21:20.049 )") 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.049 { 00:21:20.049 "params": { 00:21:20.049 "name": "Nvme$subsystem", 00:21:20.049 "trtype": "$TEST_TRANSPORT", 00:21:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.049 "adrfam": "ipv4", 00:21:20.049 "trsvcid": "$NVMF_PORT", 00:21:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.049 "hdgst": ${hdgst:-false}, 00:21:20.049 "ddgst": ${ddgst:-false} 00:21:20.049 }, 00:21:20.049 "method": "bdev_nvme_attach_controller" 00:21:20.049 } 00:21:20.049 EOF 00:21:20.049 )") 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.049 { 00:21:20.049 "params": { 00:21:20.049 "name": "Nvme$subsystem", 00:21:20.049 "trtype": "$TEST_TRANSPORT", 00:21:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.049 "adrfam": "ipv4", 00:21:20.049 "trsvcid": "$NVMF_PORT", 00:21:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.049 "hdgst": ${hdgst:-false}, 00:21:20.049 "ddgst": ${ddgst:-false} 00:21:20.049 }, 00:21:20.049 "method": "bdev_nvme_attach_controller" 00:21:20.049 } 00:21:20.049 EOF 00:21:20.049 )") 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.049 { 00:21:20.049 "params": { 00:21:20.049 "name": "Nvme$subsystem", 00:21:20.049 "trtype": "$TEST_TRANSPORT", 00:21:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.049 "adrfam": "ipv4", 00:21:20.049 "trsvcid": "$NVMF_PORT", 00:21:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.049 "hdgst": ${hdgst:-false}, 00:21:20.049 "ddgst": ${ddgst:-false} 00:21:20.049 }, 00:21:20.049 "method": "bdev_nvme_attach_controller" 00:21:20.049 } 00:21:20.049 EOF 00:21:20.049 )") 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.049 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.049 { 00:21:20.049 "params": { 00:21:20.049 "name": "Nvme$subsystem", 00:21:20.049 "trtype": "$TEST_TRANSPORT", 00:21:20.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.049 "adrfam": "ipv4", 00:21:20.049 "trsvcid": "$NVMF_PORT", 00:21:20.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.050 "hdgst": ${hdgst:-false}, 00:21:20.050 "ddgst": ${ddgst:-false} 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 } 00:21:20.050 EOF 00:21:20.050 )") 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.050 { 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme$subsystem", 00:21:20.050 "trtype": "$TEST_TRANSPORT", 00:21:20.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "$NVMF_PORT", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.050 "hdgst": ${hdgst:-false}, 00:21:20.050 "ddgst": ${ddgst:-false} 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 } 00:21:20.050 EOF 00:21:20.050 )") 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.050 { 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme$subsystem", 00:21:20.050 "trtype": "$TEST_TRANSPORT", 00:21:20.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "$NVMF_PORT", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.050 "hdgst": ${hdgst:-false}, 00:21:20.050 "ddgst": ${ddgst:-false} 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 } 00:21:20.050 EOF 00:21:20.050 )") 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:20.050 { 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme$subsystem", 00:21:20.050 "trtype": "$TEST_TRANSPORT", 00:21:20.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "$NVMF_PORT", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:20.050 "hdgst": ${hdgst:-false}, 00:21:20.050 "ddgst": ${ddgst:-false} 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 } 00:21:20.050 EOF 00:21:20.050 )") 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:20.050 15:57:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme1", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme2", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme3", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme4", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme5", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme6", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme7", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme8", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:20.050 "hdgst": false, 
00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme9", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 },{ 00:21:20.050 "params": { 00:21:20.050 "name": "Nvme10", 00:21:20.050 "trtype": "tcp", 00:21:20.050 "traddr": "10.0.0.2", 00:21:20.050 "adrfam": "ipv4", 00:21:20.050 "trsvcid": "4420", 00:21:20.050 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:20.050 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:20.050 "hdgst": false, 00:21:20.050 "ddgst": false 00:21:20.050 }, 00:21:20.050 "method": "bdev_nvme_attach_controller" 00:21:20.050 }' 00:21:20.050 [2024-07-12 15:57:49.751820] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:20.050 [2024-07-12 15:57:49.751906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71881 ] 00:21:20.309 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.309 [2024-07-12 15:57:49.819436] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.309 [2024-07-12 15:57:49.934520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.690 Running I/O for 1 seconds... 00:21:23.093 00:21:23.093 Latency(us) 00:21:23.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.093 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme1n1 : 1.14 224.66 14.04 0.00 0.00 281977.36 19320.98 253211.69 00:21:23.094 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme2n1 : 1.13 225.71 14.11 0.00 0.00 275899.35 22330.79 248551.35 00:21:23.094 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme3n1 : 1.10 233.72 14.61 0.00 0.00 261956.65 21262.79 253211.69 00:21:23.094 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme4n1 : 1.18 271.53 16.97 0.00 0.00 221229.66 18447.17 250104.79 00:21:23.094 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme5n1 : 1.22 210.36 13.15 0.00 0.00 273052.44 19612.25 279620.27 00:21:23.094 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme6n1 : 1.13 230.88 14.43 0.00 0.00 250795.02 1686.95 248551.35 00:21:23.094 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme7n1 : 1.17 223.86 13.99 0.00 0.00 251685.58 20680.25 236123.78 00:21:23.094 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 
length 0x400 00:21:23.094 Nvme8n1 : 1.20 267.75 16.73 0.00 0.00 210322.77 19515.16 253211.69 00:21:23.094 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme9n1 : 1.18 216.67 13.54 0.00 0.00 256406.38 21262.79 284280.60 00:21:23.094 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:23.094 Verification LBA range: start 0x0 length 0x400 00:21:23.094 Nvme10n1 : 1.19 268.56 16.79 0.00 0.00 203359.69 4587.52 251658.24 00:21:23.094 =================================================================================================================== 00:21:23.094 Total : 2373.70 148.36 0.00 0.00 246101.89 1686.95 284280.60 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.351 rmmod nvme_tcp 00:21:23.351 rmmod nvme_fabrics 00:21:23.351 rmmod nvme_keyring 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 71367 ']' 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 71367 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 71367 ']' 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 71367 00:21:23.351 15:57:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:23.351 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.351 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71367 00:21:23.351 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.351 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.351 
15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71367' 00:21:23.351 killing process with pid 71367 00:21:23.351 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 71367 00:21:23.351 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 71367 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:23.915 15:57:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:26.450 00:21:26.450 real 0m12.110s 00:21:26.450 user 0m34.742s 00:21:26.450 sys 0m3.494s 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 ************************************ 00:21:26.450 END TEST nvmf_shutdown_tc1 00:21:26.450 ************************************ 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 ************************************ 00:21:26.450 START TEST nvmf_shutdown_tc2 00:21:26.450 ************************************ 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.450 15:57:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.450 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:26.450 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:26.451 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:26.451 Found net devices under 0000:09:00.0: cvl_0_0 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:26.451 Found net devices under 0000:09:00.1: cvl_0_1 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:26.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:21:26.451 00:21:26.451 --- 10.0.0.2 ping statistics --- 00:21:26.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.451 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:21:26.451 00:21:26.451 --- 10.0.0.1 ping statistics --- 00:21:26.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.451 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=72730 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 72730 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 72730 ']' 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.451 15:57:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.451 [2024-07-12 15:57:55.894775] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:26.451 [2024-07-12 15:57:55.894842] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.451 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.451 [2024-07-12 15:57:55.957252] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:26.451 [2024-07-12 15:57:56.062890] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.451 [2024-07-12 15:57:56.062939] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.451 [2024-07-12 15:57:56.062962] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.451 [2024-07-12 15:57:56.062972] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.451 [2024-07-12 15:57:56.062982] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
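For reference, the nvmf_tcp_init sequence traced above reduces to the wiring below; this is a condensed sketch assembled purely from the trace lines (interface names, addresses and the 4420 port are exactly those printed in the log, nothing new is assumed):
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator address stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # let NVMe/TCP traffic in from the initiator side
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # connectivity check in both directions
The nvmf_tgt started just above runs inside cvl_0_0_ns_spdk (note the ip netns exec prefix on its command line), so it listens on the target address 10.0.0.2 while bdevperf later connects from the root namespace over cvl_0_1.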
00:21:26.451 [2024-07-12 15:57:56.063070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.451 [2024-07-12 15:57:56.063183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.451 [2024-07-12 15:57:56.063275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:26.451 [2024-07-12 15:57:56.063281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.709 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:26.709 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:26.709 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:26.709 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:26.709 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.709 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.710 [2024-07-12 15:57:56.225189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.710 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:26.710 Malloc1 00:21:26.710 [2024-07-12 15:57:56.315052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.710 Malloc2 00:21:26.710 Malloc3 00:21:26.968 Malloc4 00:21:26.968 Malloc5 00:21:26.968 Malloc6 00:21:26.968 Malloc7 00:21:26.968 Malloc8 00:21:26.968 Malloc9 00:21:27.225 Malloc10 00:21:27.225 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.225 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:27.225 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.225 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.225 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=72905 00:21:27.225 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 72905 /var/tmp/bdevperf.sock 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 72905 ']' 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:27.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 
00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:27.226 { 00:21:27.226 "params": { 00:21:27.226 "name": "Nvme$subsystem", 00:21:27.226 "trtype": "$TEST_TRANSPORT", 00:21:27.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.226 "adrfam": "ipv4", 00:21:27.226 "trsvcid": "$NVMF_PORT", 00:21:27.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.226 "hdgst": ${hdgst:-false}, 00:21:27.226 "ddgst": ${ddgst:-false} 00:21:27.226 }, 00:21:27.226 "method": "bdev_nvme_attach_controller" 00:21:27.226 } 00:21:27.226 EOF 00:21:27.226 )") 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:27.226 15:57:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:27.226 "params": { 00:21:27.227 "name": "Nvme1", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme2", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme3", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme4", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme5", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme6", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme7", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme8", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:27.227 "hdgst": false, 
00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme9", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 },{ 00:21:27.227 "params": { 00:21:27.227 "name": "Nvme10", 00:21:27.227 "trtype": "tcp", 00:21:27.227 "traddr": "10.0.0.2", 00:21:27.227 "adrfam": "ipv4", 00:21:27.227 "trsvcid": "4420", 00:21:27.227 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:27.227 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:27.227 "hdgst": false, 00:21:27.227 "ddgst": false 00:21:27.227 }, 00:21:27.227 "method": "bdev_nvme_attach_controller" 00:21:27.227 }' 00:21:27.227 [2024-07-12 15:57:56.839255] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:27.227 [2024-07-12 15:57:56.839354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72905 ] 00:21:27.227 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.227 [2024-07-12 15:57:56.902548] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.484 [2024-07-12 15:57:57.015111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.855 Running I/O for 10 seconds... 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.113 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.374 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.374 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:29.374 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:29.374 15:57:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 72905 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 72905 ']' 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 72905 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72905 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72905' 00:21:29.632 killing process with pid 72905 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 72905 00:21:29.632 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 72905 00:21:29.632 Received shutdown signal, test time was about 0.850571 seconds 00:21:29.632 00:21:29.632 Latency(us) 00:21:29.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.632 Job: Nvme1n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme1n1 : 0.80 248.33 15.52 0.00 0.00 253888.16 1541.31 259425.47
00:21:29.632 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme2n1 : 0.79 241.91 15.12 0.00 0.00 253850.86 19806.44 251658.24
00:21:29.632 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme3n1 : 0.78 245.71 15.36 0.00 0.00 244606.55 17864.63 260978.92
00:21:29.632 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme4n1 : 0.79 242.51 15.16 0.00 0.00 240955.04 26020.22 260978.92
00:21:29.632 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme5n1 : 0.85 218.90 13.68 0.00 0.00 248329.50 21554.06 260978.92
00:21:29.632 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme6n1 : 0.81 238.41 14.90 0.00 0.00 234455.29 21068.61 264085.81
00:21:29.632 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme7n1 : 0.79 242.21 15.14 0.00 0.00 223490.15 18155.90 256318.58
00:21:29.632 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme8n1 : 0.77 248.31 15.52 0.00 0.00 211332.11 17670.45 257872.02
00:21:29.632 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme9n1 : 0.76 167.37 10.46 0.00 0.00 304664.27 37282.70 288940.94
00:21:29.632 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:29.632 Verification LBA range: start 0x0 length 0x400
00:21:29.632 Nvme10n1 : 0.77 166.15 10.38 0.00 0.00 297132.18 25049.32 296708.17
00:21:29.632 ===================================================================================================================
00:21:29.633 Total : 2259.81 141.24 0.00 0.00 247747.61 1541.31 296708.17
00:21:29.633 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:29.889 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 72730 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:31.261 
15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.261 rmmod nvme_tcp 00:21:31.261 rmmod nvme_fabrics 00:21:31.261 rmmod nvme_keyring 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 72730 ']' 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 72730 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 72730 ']' 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 72730 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72730 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72730' 00:21:31.261 killing process with pid 72730 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 72730 00:21:31.261 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 72730 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.520 15:58:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:34.065 00:21:34.065 real 0m7.572s 00:21:34.065 user 0m22.510s 00:21:34.065 sys 0m1.462s 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.065 ************************************ 00:21:34.065 END TEST nvmf_shutdown_tc2 00:21:34.065 ************************************ 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.065 ************************************ 00:21:34.065 START TEST nvmf_shutdown_tc3 00:21:34.065 ************************************ 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.065 15:58:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.065 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:34.066 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:34.066 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:34.066 Found net devices under 0000:09:00.0: cvl_0_0 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:34.066 Found net devices under 0000:09:00.1: cvl_0_1 00:21:34.066 15:58:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:34.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:21:34.066 00:21:34.066 --- 10.0.0.2 ping statistics --- 00:21:34.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.066 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:21:34.066 00:21:34.066 --- 10.0.0.1 ping statistics --- 00:21:34.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.066 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=73811 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 73811 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 73811 ']' 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
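One detail worth decoding at this point: the target is again started with -m 0x1E, and 0x1E = 0x10 + 0x08 + 0x04 + 0x02, i.e. bits 1 through 4 set with bit 0 left clear. That is why the startup notices that follow report "Total cores available: 4" and reactors starting on cores 1, 2, 3 and 4, whereas the bdevperf job in the tc2 run above reported Core Mask 0x1, i.e. core 0 only. The -e 0xFFFF on the same command line is the tracepoint group mask referenced by the "Tracepoint Group Mask 0xFFFF specified" notice.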
00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.066 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.066 [2024-07-12 15:58:03.538550] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:34.066 [2024-07-12 15:58:03.538634] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.066 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.066 [2024-07-12 15:58:03.599189] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.066 [2024-07-12 15:58:03.703299] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.066 [2024-07-12 15:58:03.703372] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.067 [2024-07-12 15:58:03.703400] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.067 [2024-07-12 15:58:03.703411] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.067 [2024-07-12 15:58:03.703421] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.067 [2024-07-12 15:58:03.703502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.067 [2024-07-12 15:58:03.703568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.067 [2024-07-12 15:58:03.703635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:34.067 [2024-07-12 15:58:03.703638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.325 [2024-07-12 15:58:03.861046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.325 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.325 Malloc1 00:21:34.325 [2024-07-12 15:58:03.943157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.325 Malloc2 00:21:34.325 Malloc3 00:21:34.582 Malloc4 00:21:34.582 Malloc5 00:21:34.582 Malloc6 00:21:34.582 Malloc7 00:21:34.582 Malloc8 00:21:34.840 Malloc9 00:21:34.840 Malloc10 00:21:34.840 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.840 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:34.840 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:34.840 
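At this point the TCP transport exists (nvmf_create_transport -t tcp -o -u 8192) and shutdown.sh has appended one block of RPCs per subsystem to rpcs.txt before replaying the whole file through a bare rpc_cmd, which is what produces Malloc1 through Malloc10 and the listener on 10.0.0.2:4420. The exact lines written to rpcs.txt are not echoed in this excerpt; a representative batch using standard SPDK RPC names might look like the sketch below (the nqn pattern matches the JSON printed further down, while the 64 MiB / 512 B malloc geometry and the SPDK$i serial numbers are illustrative assumptions):

    # build one batch file with a malloc bdev, a subsystem, a namespace and a
    # TCP listener for each of the ten subsystems
    for i in {1..10}; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done

    # replay the whole batch against the running target in one RPC call
    rpc_cmd < rpcs.txt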
15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.840 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=73879 00:21:34.840 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 73879 /var/tmp/bdevperf.sock 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 73879 ']' 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.841 { 00:21:34.841 "params": { 00:21:34.841 "name": "Nvme$subsystem", 00:21:34.841 "trtype": "$TEST_TRANSPORT", 00:21:34.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.841 "adrfam": "ipv4", 00:21:34.841 "trsvcid": "$NVMF_PORT", 00:21:34.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.841 "hdgst": ${hdgst:-false}, 00:21:34.841 "ddgst": ${ddgst:-false} 00:21:34.841 }, 00:21:34.841 "method": "bdev_nvme_attach_controller" 00:21:34.841 } 00:21:34.841 EOF 00:21:34.841 )") 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.841 { 00:21:34.841 "params": { 00:21:34.841 "name": "Nvme$subsystem", 00:21:34.841 "trtype": "$TEST_TRANSPORT", 00:21:34.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.841 "adrfam": "ipv4", 00:21:34.841 "trsvcid": "$NVMF_PORT", 00:21:34.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.841 "hdgst": ${hdgst:-false}, 00:21:34.841 "ddgst": ${ddgst:-false} 00:21:34.841 }, 00:21:34.841 "method": "bdev_nvme_attach_controller" 00:21:34.841 } 00:21:34.841 EOF 00:21:34.841 )") 00:21:34.841 15:58:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.841 { 00:21:34.841 "params": { 00:21:34.841 "name": "Nvme$subsystem", 00:21:34.841 "trtype": "$TEST_TRANSPORT", 00:21:34.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.841 "adrfam": "ipv4", 00:21:34.841 "trsvcid": "$NVMF_PORT", 00:21:34.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.841 "hdgst": ${hdgst:-false}, 00:21:34.841 "ddgst": ${ddgst:-false} 00:21:34.841 }, 00:21:34.841 "method": "bdev_nvme_attach_controller" 00:21:34.841 } 00:21:34.841 EOF 00:21:34.841 )") 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.841 { 00:21:34.841 "params": { 00:21:34.841 "name": "Nvme$subsystem", 00:21:34.841 "trtype": "$TEST_TRANSPORT", 00:21:34.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.841 "adrfam": "ipv4", 00:21:34.841 "trsvcid": "$NVMF_PORT", 00:21:34.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.841 "hdgst": ${hdgst:-false}, 00:21:34.841 "ddgst": ${ddgst:-false} 00:21:34.841 }, 00:21:34.841 "method": "bdev_nvme_attach_controller" 00:21:34.841 } 00:21:34.841 EOF 00:21:34.841 )") 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.841 { 00:21:34.841 "params": { 00:21:34.841 "name": "Nvme$subsystem", 00:21:34.841 "trtype": "$TEST_TRANSPORT", 00:21:34.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.841 "adrfam": "ipv4", 00:21:34.841 "trsvcid": "$NVMF_PORT", 00:21:34.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.841 "hdgst": ${hdgst:-false}, 00:21:34.841 "ddgst": ${ddgst:-false} 00:21:34.841 }, 00:21:34.841 "method": "bdev_nvme_attach_controller" 00:21:34.841 } 00:21:34.841 EOF 00:21:34.841 )") 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.841 { 00:21:34.841 "params": { 00:21:34.841 "name": "Nvme$subsystem", 00:21:34.841 "trtype": "$TEST_TRANSPORT", 00:21:34.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.841 "adrfam": "ipv4", 00:21:34.841 "trsvcid": "$NVMF_PORT", 00:21:34.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.841 "hdgst": ${hdgst:-false}, 00:21:34.841 "ddgst": ${ddgst:-false} 00:21:34.841 }, 00:21:34.841 "method": "bdev_nvme_attach_controller" 00:21:34.841 } 00:21:34.841 EOF 00:21:34.841 )") 00:21:34.841 15:58:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.841 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.842 { 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme$subsystem", 00:21:34.842 "trtype": "$TEST_TRANSPORT", 00:21:34.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "$NVMF_PORT", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.842 "hdgst": ${hdgst:-false}, 00:21:34.842 "ddgst": ${ddgst:-false} 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 } 00:21:34.842 EOF 00:21:34.842 )") 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.842 { 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme$subsystem", 00:21:34.842 "trtype": "$TEST_TRANSPORT", 00:21:34.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "$NVMF_PORT", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.842 "hdgst": ${hdgst:-false}, 00:21:34.842 "ddgst": ${ddgst:-false} 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 } 00:21:34.842 EOF 00:21:34.842 )") 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.842 { 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme$subsystem", 00:21:34.842 "trtype": "$TEST_TRANSPORT", 00:21:34.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "$NVMF_PORT", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.842 "hdgst": ${hdgst:-false}, 00:21:34.842 "ddgst": ${ddgst:-false} 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 } 00:21:34.842 EOF 00:21:34.842 )") 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:34.842 { 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme$subsystem", 00:21:34.842 "trtype": "$TEST_TRANSPORT", 00:21:34.842 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "$NVMF_PORT", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.842 "hdgst": ${hdgst:-false}, 00:21:34.842 "ddgst": ${ddgst:-false} 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 } 00:21:34.842 EOF 00:21:34.842 )") 00:21:34.842 15:58:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:34.842 15:58:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme1", 00:21:34.842 "trtype": "tcp", 00:21:34.842 "traddr": "10.0.0.2", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "4420", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.842 "hdgst": false, 00:21:34.842 "ddgst": false 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 },{ 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme2", 00:21:34.842 "trtype": "tcp", 00:21:34.842 "traddr": "10.0.0.2", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "4420", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.842 "hdgst": false, 00:21:34.842 "ddgst": false 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 },{ 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme3", 00:21:34.842 "trtype": "tcp", 00:21:34.842 "traddr": "10.0.0.2", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "4420", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.842 "hdgst": false, 00:21:34.842 "ddgst": false 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 },{ 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme4", 00:21:34.842 "trtype": "tcp", 00:21:34.842 "traddr": "10.0.0.2", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "4420", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.842 "hdgst": false, 00:21:34.842 "ddgst": false 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 },{ 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme5", 00:21:34.842 "trtype": "tcp", 00:21:34.842 "traddr": "10.0.0.2", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "4420", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.842 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.842 "hdgst": false, 00:21:34.842 "ddgst": false 00:21:34.842 }, 00:21:34.842 "method": "bdev_nvme_attach_controller" 00:21:34.842 },{ 00:21:34.842 "params": { 00:21:34.842 "name": "Nvme6", 00:21:34.842 "trtype": "tcp", 00:21:34.842 "traddr": "10.0.0.2", 00:21:34.842 "adrfam": "ipv4", 00:21:34.842 "trsvcid": "4420", 00:21:34.842 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.843 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.843 "hdgst": false, 00:21:34.843 "ddgst": false 00:21:34.843 }, 00:21:34.843 "method": "bdev_nvme_attach_controller" 00:21:34.843 },{ 00:21:34.843 "params": { 00:21:34.843 "name": "Nvme7", 00:21:34.843 "trtype": "tcp", 00:21:34.843 "traddr": "10.0.0.2", 00:21:34.843 "adrfam": "ipv4", 00:21:34.843 "trsvcid": "4420", 00:21:34.843 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.843 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.843 "hdgst": false, 00:21:34.843 "ddgst": false 00:21:34.843 }, 00:21:34.843 "method": "bdev_nvme_attach_controller" 00:21:34.843 },{ 00:21:34.843 "params": { 00:21:34.843 "name": "Nvme8", 00:21:34.843 "trtype": "tcp", 00:21:34.843 "traddr": "10.0.0.2", 00:21:34.843 "adrfam": "ipv4", 
00:21:34.843 "trsvcid": "4420", 00:21:34.843 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.843 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:34.843 "hdgst": false, 00:21:34.843 "ddgst": false 00:21:34.843 }, 00:21:34.843 "method": "bdev_nvme_attach_controller" 00:21:34.843 },{ 00:21:34.843 "params": { 00:21:34.843 "name": "Nvme9", 00:21:34.843 "trtype": "tcp", 00:21:34.843 "traddr": "10.0.0.2", 00:21:34.843 "adrfam": "ipv4", 00:21:34.843 "trsvcid": "4420", 00:21:34.843 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.843 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:34.843 "hdgst": false, 00:21:34.843 "ddgst": false 00:21:34.843 }, 00:21:34.843 "method": "bdev_nvme_attach_controller" 00:21:34.843 },{ 00:21:34.843 "params": { 00:21:34.843 "name": "Nvme10", 00:21:34.843 "trtype": "tcp", 00:21:34.843 "traddr": "10.0.0.2", 00:21:34.843 "adrfam": "ipv4", 00:21:34.843 "trsvcid": "4420", 00:21:34.843 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.843 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.843 "hdgst": false, 00:21:34.843 "ddgst": false 00:21:34.843 }, 00:21:34.843 "method": "bdev_nvme_attach_controller" 00:21:34.843 }' 00:21:34.843 [2024-07-12 15:58:04.467486] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:34.843 [2024-07-12 15:58:04.467562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73879 ] 00:21:34.843 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.843 [2024-07-12 15:58:04.530255] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.100 [2024-07-12 15:58:04.643012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.473 Running I/O for 10 seconds... 
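The JSON dumped just above is the output of gen_nvmf_target_json for subsystems 1 through 10: one bdev_nvme_attach_controller block per subsystem, all pointing at 10.0.0.2:4420. bdevperf never reads a file on disk; the test hands it the generator's output through process substitution, which is why the command line shows --json /dev/fd/63. A sketch of the launch and of the liveness check that follows in the trace (perfpid is just a local variable name for the sketch):

    # start bdevperf against the generated config: queue depth 64, 64 KiB I/O,
    # verify workload, 10 second run, private RPC socket for the test to poll
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!

    # wait until bdevperf has finished loading the config ...
    rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init

    # ... then poll Nvme1n1 until it has completed at least 100 reads (the trace
    # shows 67, then 131) before killing the target underneath the running I/O
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
        | jq -r '.bdevs[0].num_read_ops'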
00:21:36.731 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.731 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:36.731 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:36.731 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.731 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:36.989 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # read_io_count=131 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 73811 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 73811 ']' 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 73811 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73811 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73811' 00:21:37.264 killing process with pid 73811 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 73811 00:21:37.264 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 73811 00:21:37.264 [2024-07-12 15:58:06.845997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.264 [2024-07-12 15:58:06.846256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846420] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846514] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 
15:58:06.846541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same 
with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.846974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360da0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849245] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849270] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849343] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the 
state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849636] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849648] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.265 [2024-07-12 15:58:06.849716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849865] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.849986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13639e0 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851763] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851847] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 
15:58:06.851883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851975] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.851999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852013] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852103] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same 
with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852230] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852443] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.852508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361280 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.854487] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.266 [2024-07-12 15:58:06.854706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.854739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.266 [2024-07-12 15:58:06.854754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854769] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854831] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 
15:58:06.854935] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.854961] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855160] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same 
with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855499] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.855709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.856068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.856105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.856122] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.856134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.856146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.856159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.856171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361760 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.857077] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.267 [2024-07-12 15:58:06.857190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcaca0 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.857417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857527] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b6c0 is same with the state(5) to be set 00:21:37.267 [2024-07-12 15:58:06.857609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.267 [2024-07-12 15:58:06.857712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.267 [2024-07-12 15:58:06.857724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffab0 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.857768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.268 [2024-07-12 15:58:06.857789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.268 [2024-07-12 15:58:06.857805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.268 [2024-07-12 15:58:06.857818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.268 [2024-07-12 15:58:06.857833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.268 [2024-07-12 15:58:06.857847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.268 [2024-07-12 15:58:06.857861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.268 [2024-07-12 15:58:06.857875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.268 [2024-07-12 15:58:06.857889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcb4c0 is same with the 
state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.857998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858055] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858079] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858368] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858464] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858504] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858529] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 
15:58:06.858614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858627] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858754] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.858893] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361c60 is same with the state(5) to be set 00:21:37.268 [2024-07-12 15:58:06.859339] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.268 [2024-07-12 15:58:06.860634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 
[2024-07-12 15:58:06.860661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860694] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269 [2024-07-12 15:58:06.860762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269 [2024-07-12 15:58:06.860788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269 [2024-07-12 15:58:06.860815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269 [2024-07-12 15:58:06.860828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269 [2024-07-12 15:58:06.860849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269 [2024-07-12 15:58:06.860863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269 [2024-07-12 15:58:06.860874] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.860878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.860896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.860911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.860924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.860953] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.860969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.860982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.860988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.860994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861180] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861192] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861352] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861388] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.269
[2024-07-12 15:58:06.861414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.269
[2024-07-12 15:58:06.861423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.269
[2024-07-12 15:58:06.861426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861533] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362640 is same with the state(5) to be set 00:21:37.270
[2024-07-12 15:58:06.861603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270
[2024-07-12 15:58:06.861849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270
[2024-07-12 15:58:06.861863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.861878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.861893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.861908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.861921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.861936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.861950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.861965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.861978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.861993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 
15:58:06.862154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862478] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.270 [2024-07-12 15:58:06.862596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.270 [2024-07-12 15:58:06.862610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.271 [2024-07-12 15:58:06.862853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.271 [2024-07-12 15:58:06.862942] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bfafa0 was disconnected and freed. reset controller. 00:21:37.271 [2024-07-12 15:58:06.863097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863126] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863199] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863302] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863623] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863647] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863910] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863954] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.863990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864051] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864120] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the 
state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.864198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362b20 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865396] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.271 [2024-07-12 15:58:06.865446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865470] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865482] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865532] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865557] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:37.272 [2024-07-12 15:58:06.865777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865831] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865843] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b7b0 (9): Bad file descriptor 00:21:37.272 [2024-07-12 15:58:06.865858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865871] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865884] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865918] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.272 [2024-07-12 15:58:06.865934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.865989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866001] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866024] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866087] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 
15:58:06.866127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363020 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866741] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.272 [2024-07-12 15:58:06.866963] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.866994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 
[2024-07-12 15:58:06.867193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867244] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867256] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867409] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.272 [2024-07-12 15:58:06.867448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867569] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867606] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867680] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867743] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867780] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867813] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363500 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.273 [2024-07-12 15:58:06.867898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b7b0 with addr=10.0.0.2, port=4420 00:21:37.273 [2024-07-12 15:58:06.867916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b7b0 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.867958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.867984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c23d70 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.868135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d6d0 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.868298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3c060 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.868464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcaca0 (9): Bad file descriptor 00:21:37.273 [2024-07-12 15:58:06.868519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db4800 is same with the state(5) to be set 00:21:37.273 [2024-07-12 15:58:06.868671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b6c0 (9): Bad file descriptor 00:21:37.273 [2024-07-12 15:58:06.868701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffab0 (9): Bad file descriptor 00:21:37.273 [2024-07-12 15:58:06.868730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcb4c0 (9): Bad file descriptor 00:21:37.273 [2024-07-12 15:58:06.868775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.273 [2024-07-12 15:58:06.868795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.273 [2024-07-12 15:58:06.868810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.274 [2024-07-12 15:58:06.868824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.868839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.274 [2024-07-12 15:58:06.868853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.868881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.274 [2024-07-12 15:58:06.868894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.868908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cce120 is same with the state(5) to be set 00:21:37.274 [2024-07-12 15:58:06.869058] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.274 [2024-07-12 15:58:06.869800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b7b0 (9): Bad file descriptor 00:21:37.274 [2024-07-12 15:58:06.869971] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.274 [2024-07-12 15:58:06.870241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:37.274 [2024-07-12 15:58:06.870269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:37.274 [2024-07-12 15:58:06.870286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:37.274 [2024-07-12 15:58:06.870541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.274 [2024-07-12 15:58:06.870643] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.274 [2024-07-12 15:58:06.870813] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:37.274 [2024-07-12 15:58:06.876846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:37.274 [2024-07-12 15:58:06.877167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.274 [2024-07-12 15:58:06.877198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b7b0 with addr=10.0.0.2, port=4420 00:21:37.274 [2024-07-12 15:58:06.877217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b7b0 is same with the state(5) to be set 00:21:37.274 [2024-07-12 15:58:06.877287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b7b0 (9): Bad file descriptor 00:21:37.274 [2024-07-12 15:58:06.877363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:37.274 [2024-07-12 15:58:06.877382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:37.274 [2024-07-12 15:58:06.877397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:37.274 [2024-07-12 15:58:06.877463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.274 [2024-07-12 15:58:06.877769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c23d70 (9): Bad file descriptor 00:21:37.274 [2024-07-12 15:58:06.877805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0d6d0 (9): Bad file descriptor 00:21:37.274 [2024-07-12 15:58:06.877836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3c060 (9): Bad file descriptor 00:21:37.274 [2024-07-12 15:58:06.877873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db4800 (9): Bad file descriptor 00:21:37.274 [2024-07-12 15:58:06.877924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cce120 (9): Bad file descriptor 00:21:37.274 [2024-07-12 15:58:06.878072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.878982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.878996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.879013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.879027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.274 [2024-07-12 15:58:06.879044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.274 [2024-07-12 15:58:06.879058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.275 [2024-07-12 15:58:06.879078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.275 [2024-07-12 15:58:06.879093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.275 [2024-07-12 15:58:06.879109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.275 [2024-07-12 15:58:06.879124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.275 [2024-07-12 15:58:06.879142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.275 [2024-07-12 15:58:06.879156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.275 [2024-07-12 15:58:06.879173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:37.275 [2024-07-12 15:58:06.879188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:37.275 [2024-07-12 15:58:06.879204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... the ABORTED - SQ DELETION (00/08) completion and the following READ (sqid:1, nsid:1, len:128) repeat for cid:35 through cid:63, lba 20864 through 24448 ...]
00:21:37.275 [2024-07-12 15:58:06.880154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db3870 is same with the state(5) to be set 
00:21:37.275 [2024-07-12 15:58:06.881461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... the READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0 through cid:63, lba 16384 through 24448 ...]
00:21:37.277 [2024-07-12 15:58:06.883485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d96ac0 is same with the state(5) to be set 
00:21:37.277 [2024-07-12 15:58:06.884748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... the READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:0 through cid:63, lba 16384 through 24448 ...]
00:21:37.278 [2024-07-12 15:58:06.895700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d97f70 is same with the state(5) to be set 
00:21:37.279 [2024-07-12 15:58:06.897099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... the READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:49, lba 16512 through 22656 ...]
00:21:37.280 [2024-07-12 15:58:06.898679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:37.280 [2024-07-12 15:58:06.898693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:37.280 [2024-07-12 15:58:06.898710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.898980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.898993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 
15:58:06.899009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.899024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.899048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.899062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.899078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.899093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.899107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dacde0 is same with the state(5) to be set 00:21:37.280 [2024-07-12 15:58:06.900743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.280 [2024-07-12 15:58:06.900776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:37.280 [2024-07-12 15:58:06.900794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:37.280 [2024-07-12 15:58:06.900905] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.280 [2024-07-12 15:58:06.901032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:37.280 [2024-07-12 15:58:06.901353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.280 [2024-07-12 15:58:06.901393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffab0 with addr=10.0.0.2, port=4420 00:21:37.280 [2024-07-12 15:58:06.901410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffab0 is same with the state(5) to be set 00:21:37.280 [2024-07-12 15:58:06.901555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.280 [2024-07-12 15:58:06.901581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcb4c0 with addr=10.0.0.2, port=4420 00:21:37.280 [2024-07-12 15:58:06.901597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcb4c0 is same with the state(5) to be set 00:21:37.280 [2024-07-12 15:58:06.901730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.280 [2024-07-12 15:58:06.901756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2b6c0 with addr=10.0.0.2, port=4420 00:21:37.280 [2024-07-12 15:58:06.901771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b6c0 is same with the state(5) to be set 00:21:37.280 [2024-07-12 15:58:06.902622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.902985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.902999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.903014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.903028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.903044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.280 [2024-07-12 15:58:06.903058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.280 [2024-07-12 15:58:06.903074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:37.281 [2024-07-12 15:58:06.903452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 
15:58:06.903759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.903975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.903989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.904005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.904019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.904035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.904048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.904064] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.904078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.904094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.904108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.904123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.904137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.904154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.904168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.281 [2024-07-12 15:58:06.904184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.281 [2024-07-12 15:58:06.904198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.904605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.904619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf9ad0 is same with the state(5) to be set 00:21:37.282 [2024-07-12 15:58:06.905894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.905919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.905940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.905956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.905973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.905993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906269] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.282 [2024-07-12 15:58:06.906791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.282 [2024-07-12 15:58:06.906805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.906821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.906835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.906851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.906865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.906881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.906895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.906912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.906926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.906943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.906957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.906973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.906987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:37.283 [2024-07-12 15:58:06.907534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 
15:58:06.907839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.907884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.907899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23859f0 is same with the state(5) to be set 00:21:37.283 [2024-07-12 15:58:06.909155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.909182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.909208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.909224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.909241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.909256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.909272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.909286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.909302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.909323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.909341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.909355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.283 [2024-07-12 15:58:06.909371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.283 [2024-07-12 15:58:06.909386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.909972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.909990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.284 [2024-07-12 15:58:06.910709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.284 [2024-07-12 15:58:06.910723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.910974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.910990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.911004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.911020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.911034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.911050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.911065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.911081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.911095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.911112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.911126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.911142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.911160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.911175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252d410 is same with the state(5) to be set 00:21:37.285 [2024-07-12 15:58:06.912403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912513] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.912976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.912990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.913006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.913020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.913036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.285 [2024-07-12 15:58:06.913050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.285 [2024-07-12 15:58:06.913067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:37.286 [2024-07-12 15:58:06.913767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.913971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.913985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 
15:58:06.914077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.286 [2024-07-12 15:58:06.914379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.286 [2024-07-12 15:58:06.914393] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.914409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d4f50 is same with the state(5) to be set 00:21:37.287 [2024-07-12 15:58:06.915635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915928] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.915979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.915995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.287 [2024-07-12 15:58:06.916941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.287 [2024-07-12 15:58:06.916956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.916972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.916987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:37.288 [2024-07-12 15:58:06.917206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 
15:58:06.917516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.288 [2024-07-12 15:58:06.917638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.288 [2024-07-12 15:58:06.917653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dab8e0 is same with the state(5) to be set 00:21:37.288 [2024-07-12 15:58:06.920606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:37.288 [2024-07-12 15:58:06.920646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:37.288 [2024-07-12 15:58:06.920666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:37.288 [2024-07-12 15:58:06.920683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:37.288 [2024-07-12 15:58:06.920978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.288 [2024-07-12 15:58:06.921008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcaca0 with addr=10.0.0.2, port=4420 00:21:37.288 [2024-07-12 15:58:06.921026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcaca0 is same with the state(5) to be set 00:21:37.288 [2024-07-12 15:58:06.921054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffab0 (9): Bad file descriptor 00:21:37.288 [2024-07-12 15:58:06.921075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcb4c0 (9): Bad file descriptor 00:21:37.288 [2024-07-12 15:58:06.921094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b6c0 (9): Bad file descriptor 00:21:37.288 [2024-07-12 15:58:06.921152] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.288 [2024-07-12 15:58:06.921177] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.288 [2024-07-12 15:58:06.921198] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.288 [2024-07-12 15:58:06.921216] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.288 [2024-07-12 15:58:06.921235] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.288 [2024-07-12 15:58:06.921254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcaca0 (9): Bad file descriptor 00:21:37.288 [2024-07-12 15:58:06.921380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:37.288 task offset: 22912 on job bdev=Nvme5n1 fails
00:21:37.288
00:21:37.288 Latency(us)
00:21:37.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:37.288 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme1n1 ended in about 0.93 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme1n1 : 0.93 137.54 8.60 68.77 0.00 306860.69 28156.21 293601.28
00:21:37.288 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme2n1 ended in about 0.93 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme2n1 : 0.93 137.05 8.57 68.53 0.00 301879.50 24758.04 318456.41
00:21:37.288 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme3n1 ended in about 0.95 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme3n1 : 0.95 135.29 8.46 67.64 0.00 299862.09 45049.93 292047.83
00:21:37.288 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme4n1 ended in about 0.96 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme4n1 : 0.96 134.03 8.38 67.01 0.00 296663.42 18738.44 315349.52
00:21:37.288 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme5n1 ended in about 0.91 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme5n1 : 0.91 139.99 8.75 70.00 0.00 277105.40 4320.52 351078.78
00:21:37.288 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme6n1 ended in about 0.96 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme6n1 : 0.96 133.57 8.35 66.79 0.00 285666.16 19515.16 293601.28
00:21:37.288 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme7n1 ended in about 0.96 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme7n1 : 0.96 133.12 8.32 66.56 0.00 280508.49 20291.89 293601.28
00:21:37.288 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme8n1 ended in about 0.96 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme8n1 : 0.96 132.67 8.29 66.34 0.00 275617.06 16505.36 309135.74
00:21:37.288 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme9n1 ended in about 0.97 seconds with error
00:21:37.288 Verification LBA range: start 0x0 length 0x400
00:21:37.288 Nvme9n1 : 0.97 132.23 8.26 66.12 0.00 270706.16 22233.69 278066.82
00:21:37.288 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:37.288 Job: Nvme10n1 ended in about 0.95 seconds with error
00:21:37.290 Verification LBA range: start 0x0 length 0x400
00:21:37.290 Nvme10n1 : 0.95 134.81 8.43 67.40 0.00 258566.07 20680.25 257872.02
00:21:37.290 ===================================================================================================================
00:21:37.290 Total : 1350.31 84.39 675.15 0.00 285343.50 4320.52 351078.78
00:21:37.290 [2024-07-12 15:58:06.950541] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:37.290 [2024-07-12 15:58:06.950623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:37.290 [2024-07-12 15:58:06.950959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.290 [2024-07-12 15:58:06.950997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b7b0 with addr=10.0.0.2, port=4420 00:21:37.290 [2024-07-12 15:58:06.951018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b7b0 is same with the state(5) to be set 00:21:37.290 [2024-07-12 15:58:06.951166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.290 [2024-07-12 15:58:06.951194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c23d70 with addr=10.0.0.2, port=4420 00:21:37.290 [2024-07-12 15:58:06.951221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c23d70 is same with the state(5) to be set 00:21:37.290 [2024-07-12 15:58:06.951349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.290 [2024-07-12 15:58:06.951377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c0d6d0 with addr=10.0.0.2, port=4420 00:21:37.290 [2024-07-12 15:58:06.951394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c0d6d0 is same with the state(5) to be set 00:21:37.290 [2024-07-12 15:58:06.951536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.290 [2024-07-12 15:58:06.951562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cce120 with addr=10.0.0.2, port=4420 00:21:37.290 [2024-07-12 15:58:06.951578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cce120 is same with the state(5) to be set 00:21:37.290 [2024-07-12 15:58:06.951598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.290 [2024-07-12 15:58:06.951612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.290 [2024-07-12 15:58:06.951628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.291 [2024-07-12 15:58:06.951657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.951673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.951686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:37.291 [2024-07-12 15:58:06.951705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.951718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.951732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:37.291 [2024-07-12 15:58:06.953173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.953200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.953213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.953394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.291 [2024-07-12 15:58:06.953421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3c060 with addr=10.0.0.2, port=4420 00:21:37.291 [2024-07-12 15:58:06.953438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3c060 is same with the state(5) to be set 00:21:37.291 [2024-07-12 15:58:06.953566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.291 [2024-07-12 15:58:06.953592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db4800 with addr=10.0.0.2, port=4420 00:21:37.291 [2024-07-12 15:58:06.953607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db4800 is same with the state(5) to be set 00:21:37.291 [2024-07-12 15:58:06.953633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b7b0 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.953655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c23d70 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.953673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c0d6d0 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.953691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cce120 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.953708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.953726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.953740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:37.291 [2024-07-12 15:58:06.953814] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.291 [2024-07-12 15:58:06.953839] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.291 [2024-07-12 15:58:06.953858] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.291 [2024-07-12 15:58:06.953876] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:37.291 [2024-07-12 15:58:06.953894] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:37.291 [2024-07-12 15:58:06.953979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.954019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3c060 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.954041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db4800 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.954058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.954070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.954084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:37.291 [2024-07-12 15:58:06.954103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.954118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.954131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:37.291 [2024-07-12 15:58:06.954148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.954162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.954176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:37.291 [2024-07-12 15:58:06.954191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.954205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.954219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:37.291 [2024-07-12 15:58:06.954302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:37.291 [2024-07-12 15:58:06.954334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:37.291 [2024-07-12 15:58:06.954352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.291 [2024-07-12 15:58:06.954367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.954380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.954392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.954403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:37.291 [2024-07-12 15:58:06.954435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.954456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.954471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:37.291 [2024-07-12 15:58:06.954487] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.954501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.954514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:37.291 [2024-07-12 15:58:06.954553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.954571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.954695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.291 [2024-07-12 15:58:06.954721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2b6c0 with addr=10.0.0.2, port=4420 00:21:37.291 [2024-07-12 15:58:06.954739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b6c0 is same with the state(5) to be set 00:21:37.291 [2024-07-12 15:58:06.954866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.291 [2024-07-12 15:58:06.954891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcb4c0 with addr=10.0.0.2, port=4420 00:21:37.291 [2024-07-12 15:58:06.954907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcb4c0 is same with the state(5) to be set 00:21:37.291 [2024-07-12 15:58:06.955032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:37.291 [2024-07-12 15:58:06.955057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bffab0 with addr=10.0.0.2, port=4420 00:21:37.291 [2024-07-12 15:58:06.955073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bffab0 is same with the state(5) to be set 00:21:37.291 [2024-07-12 15:58:06.955117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b6c0 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.955141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcb4c0 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.955160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bffab0 (9): Bad file descriptor 00:21:37.291 [2024-07-12 15:58:06.955202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.955222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.955236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:21:37.291 [2024-07-12 15:58:06.955253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.955267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.955280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:37.291 [2024-07-12 15:58:06.955296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:37.291 [2024-07-12 15:58:06.955309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:37.291 [2024-07-12 15:58:06.955334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.291 [2024-07-12 15:58:06.955373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.955391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.291 [2024-07-12 15:58:06.955404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:37.866 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:37.866 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 73879 00:21:38.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (73879) - No such process 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.802 rmmod nvme_tcp 00:21:38.802 rmmod nvme_fabrics 00:21:38.802 rmmod nvme_keyring 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
return 0 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.802 15:58:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.343 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:41.343 00:21:41.343 real 0m7.270s 00:21:41.343 user 0m16.865s 00:21:41.343 sys 0m1.492s 00:21:41.343 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.343 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 ************************************ 00:21:41.343 END TEST nvmf_shutdown_tc3 00:21:41.343 ************************************ 00:21:41.343 15:58:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:41.343 15:58:10 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:41.343 00:21:41.343 real 0m27.173s 00:21:41.343 user 1m14.204s 00:21:41.343 sys 0m6.596s 00:21:41.343 15:58:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.343 15:58:10 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 ************************************ 00:21:41.343 END TEST nvmf_shutdown 00:21:41.343 ************************************ 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:41.343 15:58:10 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 15:58:10 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 15:58:10 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:41.343 15:58:10 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.343 15:58:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:41.343 ************************************ 00:21:41.343 START TEST nvmf_multicontroller 00:21:41.343 ************************************ 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:41.343 * Looking for test storage... 00:21:41.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:41.343 15:58:10 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.343 15:58:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.274 15:58:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:43.274 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:43.274 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:43.274 Found net devices under 0000:09:00.0: cvl_0_0 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:43.274 Found net devices under 0000:09:00.1: cvl_0_1 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.274 15:58:12 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:21:43.274 00:21:43.274 --- 10.0.0.2 ping statistics --- 00:21:43.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.274 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:21:43.274 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:21:43.274 00:21:43.274 --- 10.0.0.1 ping statistics --- 00:21:43.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.274 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=76386 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 76386 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 76386 ']' 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 
-- # local max_retries=100 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.275 15:58:12 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.532 [2024-07-12 15:58:13.036736] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:43.532 [2024-07-12 15:58:13.036817] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.532 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.532 [2024-07-12 15:58:13.102045] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:43.532 [2024-07-12 15:58:13.217160] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.532 [2024-07-12 15:58:13.217214] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.532 [2024-07-12 15:58:13.217229] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.532 [2024-07-12 15:58:13.217240] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.532 [2024-07-12 15:58:13.217250] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.532 [2024-07-12 15:58:13.217347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.533 [2024-07-12 15:58:13.217409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:43.533 [2024-07-12 15:58:13.217412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 [2024-07-12 15:58:13.369169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 Malloc0 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 [2024-07-12 15:58:13.440195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 [2024-07-12 15:58:13.448084] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 Malloc1 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=76416 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 76416 /var/tmp/bdevperf.sock 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 76416 ']' 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
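Note: the bdevperf process launched above was started with -z, so it idles until it is driven over /var/tmp/bdevperf.sock; the trace entries that follow do exactly that through rpc_cmd. A minimal stand-alone sketch of the same flow, using the rpc.py and bdevperf.py helpers from the spdk tree and the NQN, address and ports taken from this run (paths relative to the spdk checkout):

  # attach the first path to the subsystem created earlier; -i/-c pin the host-side address and service id
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # once the NVMe0n1 bdev exists, kick off the queued write workload configured on the command line (-q 128 -o 4096 -w write -t 1)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests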
00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.790 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.355 NVMe0n1 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.355 1 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.355 request: 00:21:44.355 { 00:21:44.355 "name": "NVMe0", 00:21:44.355 "trtype": "tcp", 00:21:44.355 "traddr": "10.0.0.2", 00:21:44.355 "adrfam": "ipv4", 00:21:44.355 "trsvcid": "4420", 00:21:44.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.355 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:44.355 "hostaddr": "10.0.0.2", 00:21:44.355 "hostsvcid": "60000", 00:21:44.355 "prchk_reftag": false, 
00:21:44.355 "prchk_guard": false, 00:21:44.355 "hdgst": false, 00:21:44.355 "ddgst": false, 00:21:44.355 "method": "bdev_nvme_attach_controller", 00:21:44.355 "req_id": 1 00:21:44.355 } 00:21:44.355 Got JSON-RPC error response 00:21:44.355 response: 00:21:44.355 { 00:21:44.355 "code": -114, 00:21:44.355 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:44.355 } 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.355 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.355 request: 00:21:44.355 { 00:21:44.356 "name": "NVMe0", 00:21:44.356 "trtype": "tcp", 00:21:44.356 "traddr": "10.0.0.2", 00:21:44.356 "adrfam": "ipv4", 00:21:44.356 "trsvcid": "4420", 00:21:44.356 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:44.356 "hostaddr": "10.0.0.2", 00:21:44.356 "hostsvcid": "60000", 00:21:44.356 "prchk_reftag": false, 00:21:44.356 "prchk_guard": false, 00:21:44.356 "hdgst": false, 00:21:44.356 "ddgst": false, 00:21:44.356 "method": "bdev_nvme_attach_controller", 00:21:44.356 "req_id": 1 00:21:44.356 } 00:21:44.356 Got JSON-RPC error response 00:21:44.356 response: 00:21:44.356 { 00:21:44.356 "code": -114, 00:21:44.356 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:44.356 } 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.356 request: 00:21:44.356 { 00:21:44.356 "name": "NVMe0", 00:21:44.356 "trtype": "tcp", 00:21:44.356 "traddr": "10.0.0.2", 00:21:44.356 "adrfam": "ipv4", 00:21:44.356 "trsvcid": "4420", 00:21:44.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.356 "hostaddr": "10.0.0.2", 00:21:44.356 "hostsvcid": "60000", 00:21:44.356 "prchk_reftag": false, 00:21:44.356 "prchk_guard": false, 00:21:44.356 "hdgst": false, 00:21:44.356 "ddgst": false, 00:21:44.356 "multipath": "disable", 00:21:44.356 "method": "bdev_nvme_attach_controller", 00:21:44.356 "req_id": 1 00:21:44.356 } 00:21:44.356 Got JSON-RPC error response 00:21:44.356 response: 00:21:44.356 { 00:21:44.356 "code": -114, 00:21:44.356 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:44.356 } 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.356 request: 00:21:44.356 { 00:21:44.356 "name": "NVMe0", 00:21:44.356 "trtype": "tcp", 00:21:44.356 "traddr": "10.0.0.2", 00:21:44.356 "adrfam": "ipv4", 00:21:44.356 "trsvcid": "4420", 00:21:44.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.356 "hostaddr": "10.0.0.2", 00:21:44.356 "hostsvcid": "60000", 00:21:44.356 "prchk_reftag": false, 00:21:44.356 "prchk_guard": false, 00:21:44.356 "hdgst": false, 00:21:44.356 "ddgst": false, 00:21:44.356 "multipath": "failover", 00:21:44.356 "method": "bdev_nvme_attach_controller", 00:21:44.356 "req_id": 1 00:21:44.356 } 00:21:44.356 Got JSON-RPC error response 00:21:44.356 response: 00:21:44.356 { 00:21:44.356 "code": -114, 00:21:44.356 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:44.356 } 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:44.356 15:58:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.356 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.356 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.613 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:44.614 15:58:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.986 0 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 76416 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 76416 ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 76416 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76416 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76416' 00:21:45.986 killing process with pid 76416 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 76416 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 76416 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:45.986 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:45.986 [2024-07-12 15:58:13.555209] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:45.986 [2024-07-12 15:58:13.555292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76416 ] 00:21:45.986 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.986 [2024-07-12 15:58:13.615115] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.986 [2024-07-12 15:58:13.727711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.986 [2024-07-12 15:58:14.140846] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 618278e1-24f3-4ad1-be0c-bad5f92c18ce already exists 00:21:45.986 [2024-07-12 15:58:14.140884] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:618278e1-24f3-4ad1-be0c-bad5f92c18ce alias for bdev NVMe1n1 00:21:45.986 [2024-07-12 15:58:14.140899] bdev_nvme.c:4322:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:45.986 Running I/O for 1 seconds... 
00:21:45.986 00:21:45.986 Latency(us) 00:21:45.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.986 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:45.986 NVMe0n1 : 1.01 17160.01 67.03 0.00 0.00 7427.54 4077.80 11650.84 00:21:45.986 =================================================================================================================== 00:21:45.986 Total : 17160.01 67.03 0.00 0.00 7427.54 4077.80 11650.84 00:21:45.986 Received shutdown signal, test time was about 1.000000 seconds 00:21:45.986 00:21:45.986 Latency(us) 00:21:45.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.986 =================================================================================================================== 00:21:45.986 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.986 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:45.986 rmmod nvme_tcp 00:21:45.986 rmmod nvme_fabrics 00:21:45.986 rmmod nvme_keyring 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 76386 ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 76386 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 76386 ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 76386 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76386 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76386' 00:21:45.986 killing process with pid 76386 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 76386 00:21:45.986 15:58:15 nvmf_tcp.nvmf_multicontroller 
-- common/autotest_common.sh@972 -- # wait 76386 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.553 15:58:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.457 15:58:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:48.457 00:21:48.457 real 0m7.398s 00:21:48.457 user 0m11.122s 00:21:48.457 sys 0m2.373s 00:21:48.457 15:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:48.457 15:58:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:48.457 ************************************ 00:21:48.457 END TEST nvmf_multicontroller 00:21:48.457 ************************************ 00:21:48.457 15:58:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:48.457 15:58:18 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:48.457 15:58:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:48.457 15:58:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.457 15:58:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:48.457 ************************************ 00:21:48.457 START TEST nvmf_aer 00:21:48.457 ************************************ 00:21:48.457 15:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:48.457 * Looking for test storage... 
00:21:48.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.715 15:58:18 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:48.716 15:58:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:50.628 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 
0x159b)' 00:21:50.628 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:50.628 Found net devices under 0000:09:00.0: cvl_0_0 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:50.628 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:50.629 Found net devices under 0000:09:00.1: cvl_0_1 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:50.629 
15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:50.629 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:50.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:50.886 00:21:50.886 --- 10.0.0.2 ping statistics --- 00:21:50.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.886 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:50.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:50.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:21:50.886 00:21:50.886 --- 10.0.0.1 ping statistics --- 00:21:50.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.886 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:50.886 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=78666 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 78666 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 78666 ']' 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.887 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:50.887 [2024-07-12 15:58:20.525330] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:21:50.887 [2024-07-12 15:58:20.525407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.887 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.887 [2024-07-12 15:58:20.588500] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.144 [2024-07-12 15:58:20.693981] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.144 [2024-07-12 15:58:20.694033] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:51.144 [2024-07-12 15:58:20.694057] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.144 [2024-07-12 15:58:20.694066] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.144 [2024-07-12 15:58:20.694076] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.144 [2024-07-12 15:58:20.694154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.144 [2024-07-12 15:58:20.694218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.144 [2024-07-12 15:58:20.694294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.144 [2024-07-12 15:58:20.694296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.144 [2024-07-12 15:58:20.838971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.144 Malloc0 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.144 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.401 [2024-07-12 15:58:20.890102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.401 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.401 [ 00:21:51.401 { 00:21:51.401 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:51.401 "subtype": "Discovery", 00:21:51.401 "listen_addresses": [], 00:21:51.401 "allow_any_host": true, 00:21:51.401 "hosts": [] 00:21:51.401 }, 00:21:51.402 { 00:21:51.402 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.402 "subtype": "NVMe", 00:21:51.402 "listen_addresses": [ 00:21:51.402 { 00:21:51.402 "trtype": "TCP", 00:21:51.402 "adrfam": "IPv4", 00:21:51.402 "traddr": "10.0.0.2", 00:21:51.402 "trsvcid": "4420" 00:21:51.402 } 00:21:51.402 ], 00:21:51.402 "allow_any_host": true, 00:21:51.402 "hosts": [], 00:21:51.402 "serial_number": "SPDK00000000000001", 00:21:51.402 "model_number": "SPDK bdev Controller", 00:21:51.402 "max_namespaces": 2, 00:21:51.402 "min_cntlid": 1, 00:21:51.402 "max_cntlid": 65519, 00:21:51.402 "namespaces": [ 00:21:51.402 { 00:21:51.402 "nsid": 1, 00:21:51.402 "bdev_name": "Malloc0", 00:21:51.402 "name": "Malloc0", 00:21:51.402 "nguid": "7180F56DBD8A40D0979FCED78067B1DE", 00:21:51.402 "uuid": "7180f56d-bd8a-40d0-979f-ced78067b1de" 00:21:51.402 } 00:21:51.402 ] 00:21:51.402 } 00:21:51.402 ] 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=78764 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:51.402 15:58:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:51.402 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.402 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.659 Malloc1 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.659 [ 00:21:51.659 { 00:21:51.659 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:51.659 "subtype": "Discovery", 00:21:51.659 "listen_addresses": [], 00:21:51.659 "allow_any_host": true, 00:21:51.659 "hosts": [] 00:21:51.659 }, 00:21:51.659 { 00:21:51.659 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.659 "subtype": "NVMe", 00:21:51.659 "listen_addresses": [ 00:21:51.659 { 00:21:51.659 "trtype": "TCP", 00:21:51.659 "adrfam": "IPv4", 00:21:51.659 "traddr": "10.0.0.2", 00:21:51.659 "trsvcid": "4420" 00:21:51.659 } 00:21:51.659 ], 00:21:51.659 "allow_any_host": true, 00:21:51.659 "hosts": [], 00:21:51.659 "serial_number": "SPDK00000000000001", 00:21:51.659 "model_number": "SPDK bdev Controller", 00:21:51.659 "max_namespaces": 2, 00:21:51.659 "min_cntlid": 1, 00:21:51.659 "max_cntlid": 65519, 00:21:51.659 "namespaces": [ 00:21:51.659 { 00:21:51.659 "nsid": 1, 00:21:51.659 "bdev_name": "Malloc0", 00:21:51.659 "name": "Malloc0", 00:21:51.659 "nguid": "7180F56DBD8A40D0979FCED78067B1DE", 00:21:51.659 "uuid": "7180f56d-bd8a-40d0-979f-ced78067b1de" 00:21:51.659 }, 00:21:51.659 { 00:21:51.659 "nsid": 2, 00:21:51.659 "bdev_name": "Malloc1", 00:21:51.659 "name": "Malloc1", 00:21:51.659 "nguid": "7F3F85E653C8455A8C7C7CD7C5F99A3A", 00:21:51.659 "uuid": "7f3f85e6-53c8-455a-8c7c-7cd7c5f99a3a" 00:21:51.659 } 00:21:51.659 ] 00:21:51.659 } 00:21:51.659 ] 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 78764 00:21:51.659 Asynchronous Event Request test 00:21:51.659 Attaching to 10.0.0.2 00:21:51.659 Attached to 10.0.0.2 00:21:51.659 Registering asynchronous event callbacks... 00:21:51.659 Starting namespace attribute notice tests for all controllers... 00:21:51.659 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:51.659 aer_cb - Changed Namespace 00:21:51.659 Cleaning up... 
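The trace above is the complete nvmf_aer flow: create the TCP transport, back subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0, start the listener on 10.0.0.2:4420, launch the aer tool, then add a second namespace (Malloc1) so the target raises a namespace-attribute-changed AEN, which the tool reports as "aer_cb - Changed Namespace". A minimal sketch of that RPC sequence, assuming a running nvmf_tgt and the stock scripts/rpc.py client (rpc_cmd in the trace is the test harness's wrapper around it; sizes and the NQN below are copied from the trace, nothing else is):

    # Target-side setup, as exercised by host/aer.sh above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Once the aer tool has connected (it signals readiness via /tmp/aer_touch_file),
    # adding a second namespace is what triggers the async event seen in the output above.
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2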
00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:51.659 rmmod nvme_tcp 00:21:51.659 rmmod nvme_fabrics 00:21:51.659 rmmod nvme_keyring 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 78666 ']' 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 78666 00:21:51.659 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 78666 ']' 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 78666 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78666 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78666' 00:21:51.660 killing process with pid 78666 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 78666 00:21:51.660 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 78666 00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
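Teardown then runs in two halves, as the trace shows: the RPC-level cleanup and host module unloads above, followed by nvmf_tcp_fini (its steps continue just below), which flushes the initiator interface and removes the target's network namespace. Roughly, assuming the same interface and process names as in this run:

    # RPC cleanup mirrored from the trace above
    scripts/rpc.py bdev_malloc_delete Malloc0
    scripts/rpc.py bdev_malloc_delete Malloc1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # Host-side module unload and target shutdown (killprocess 78666 in the trace)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Network cleanup performed by nvmf_tcp_fini, traced in the lines that follow:
    ip -4 addr flush cvl_0_1
    # plus removal of the cvl_0_0_ns_spdk namespace via _remove_spdk_ns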
00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:51.917 15:58:21 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.466 15:58:23 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.466 00:21:54.466 real 0m5.537s 00:21:54.466 user 0m4.320s 00:21:54.466 sys 0m1.930s 00:21:54.466 15:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:54.466 15:58:23 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:54.466 ************************************ 00:21:54.466 END TEST nvmf_aer 00:21:54.466 ************************************ 00:21:54.466 15:58:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:54.466 15:58:23 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:54.466 15:58:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:54.466 15:58:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:54.466 15:58:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:54.466 ************************************ 00:21:54.466 START TEST nvmf_async_init 00:21:54.466 ************************************ 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:54.466 * Looking for test storage... 
00:21:54.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ba690676feb547939a1d12a646e88bf6 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.466 15:58:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.466 15:58:23 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:56.367 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:56.367 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.367 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:56.368 Found net devices under 0000:09:00.0: cvl_0_0 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:56.368 Found net devices under 0000:09:00.1: cvl_0_1 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:21:56.368 00:21:56.368 --- 10.0.0.2 ping statistics --- 00:21:56.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.368 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:21:56.368 00:21:56.368 --- 10.0.0.1 ping statistics --- 00:21:56.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.368 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=80708 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 80708 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 80708 ']' 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.368 15:58:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.368 [2024-07-12 15:58:26.007309] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:21:56.368 [2024-07-12 15:58:26.007392] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.368 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.368 [2024-07-12 15:58:26.068495] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.626 [2024-07-12 15:58:26.177641] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.626 [2024-07-12 15:58:26.177690] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.626 [2024-07-12 15:58:26.177703] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.626 [2024-07-12 15:58:26.177714] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.626 [2024-07-12 15:58:26.177724] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.626 [2024-07-12 15:58:26.177754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.626 [2024-07-12 15:58:26.326978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.626 null0 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.626 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.883 15:58:26 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ba690676feb547939a1d12a646e88bf6 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.883 [2024-07-12 15:58:26.367219] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.883 nvme0n1 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.883 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:56.883 [ 00:21:56.883 { 00:21:56.883 "name": "nvme0n1", 00:21:56.883 "aliases": [ 00:21:56.883 "ba690676-feb5-4793-9a1d-12a646e88bf6" 00:21:56.883 ], 00:21:56.883 "product_name": "NVMe disk", 00:21:56.883 "block_size": 512, 00:21:56.883 "num_blocks": 2097152, 00:21:56.883 "uuid": "ba690676-feb5-4793-9a1d-12a646e88bf6", 00:21:56.883 "assigned_rate_limits": { 00:21:56.884 "rw_ios_per_sec": 0, 00:21:56.884 "rw_mbytes_per_sec": 0, 00:21:56.884 "r_mbytes_per_sec": 0, 00:21:56.884 "w_mbytes_per_sec": 0 00:21:56.884 }, 00:21:56.884 "claimed": false, 00:21:56.884 "zoned": false, 00:21:56.884 "supported_io_types": { 00:21:56.884 "read": true, 00:21:56.884 "write": true, 00:21:56.884 "unmap": false, 00:21:56.884 "flush": true, 00:21:56.884 "reset": true, 00:21:56.884 "nvme_admin": true, 00:21:56.884 "nvme_io": true, 00:21:56.884 "nvme_io_md": false, 00:21:56.884 "write_zeroes": true, 00:21:56.884 "zcopy": false, 00:21:56.884 "get_zone_info": false, 00:21:56.884 "zone_management": false, 00:21:56.884 "zone_append": false, 00:21:56.884 "compare": true, 00:21:56.884 "compare_and_write": true, 00:21:56.884 "abort": true, 00:21:56.884 "seek_hole": false, 00:21:56.884 "seek_data": false, 00:21:56.884 "copy": true, 00:21:56.884 "nvme_iov_md": false 00:21:56.884 }, 00:21:56.884 "memory_domains": [ 00:21:56.884 { 00:21:56.884 "dma_device_id": "system", 00:21:56.884 "dma_device_type": 1 00:21:56.884 } 00:21:56.884 ], 00:21:56.884 "driver_specific": { 00:21:56.884 "nvme": [ 00:21:56.884 { 00:21:56.884 "trid": { 00:21:56.884 "trtype": "TCP", 00:21:56.884 "adrfam": "IPv4", 00:21:57.141 "traddr": "10.0.0.2", 
00:21:57.141 "trsvcid": "4420", 00:21:57.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.141 }, 00:21:57.141 "ctrlr_data": { 00:21:57.141 "cntlid": 1, 00:21:57.141 "vendor_id": "0x8086", 00:21:57.141 "model_number": "SPDK bdev Controller", 00:21:57.141 "serial_number": "00000000000000000000", 00:21:57.141 "firmware_revision": "24.09", 00:21:57.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.141 "oacs": { 00:21:57.141 "security": 0, 00:21:57.141 "format": 0, 00:21:57.141 "firmware": 0, 00:21:57.141 "ns_manage": 0 00:21:57.141 }, 00:21:57.141 "multi_ctrlr": true, 00:21:57.141 "ana_reporting": false 00:21:57.141 }, 00:21:57.141 "vs": { 00:21:57.141 "nvme_version": "1.3" 00:21:57.141 }, 00:21:57.141 "ns_data": { 00:21:57.141 "id": 1, 00:21:57.141 "can_share": true 00:21:57.141 } 00:21:57.141 } 00:21:57.141 ], 00:21:57.141 "mp_policy": "active_passive" 00:21:57.141 } 00:21:57.141 } 00:21:57.141 ] 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.141 [2024-07-12 15:58:26.620038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:57.141 [2024-07-12 15:58:26.620108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b12b0 (9): Bad file descriptor 00:21:57.141 [2024-07-12 15:58:26.752455] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.141 [ 00:21:57.141 { 00:21:57.141 "name": "nvme0n1", 00:21:57.141 "aliases": [ 00:21:57.141 "ba690676-feb5-4793-9a1d-12a646e88bf6" 00:21:57.141 ], 00:21:57.141 "product_name": "NVMe disk", 00:21:57.141 "block_size": 512, 00:21:57.141 "num_blocks": 2097152, 00:21:57.141 "uuid": "ba690676-feb5-4793-9a1d-12a646e88bf6", 00:21:57.141 "assigned_rate_limits": { 00:21:57.141 "rw_ios_per_sec": 0, 00:21:57.141 "rw_mbytes_per_sec": 0, 00:21:57.141 "r_mbytes_per_sec": 0, 00:21:57.141 "w_mbytes_per_sec": 0 00:21:57.141 }, 00:21:57.141 "claimed": false, 00:21:57.141 "zoned": false, 00:21:57.141 "supported_io_types": { 00:21:57.141 "read": true, 00:21:57.141 "write": true, 00:21:57.141 "unmap": false, 00:21:57.141 "flush": true, 00:21:57.141 "reset": true, 00:21:57.141 "nvme_admin": true, 00:21:57.141 "nvme_io": true, 00:21:57.141 "nvme_io_md": false, 00:21:57.141 "write_zeroes": true, 00:21:57.141 "zcopy": false, 00:21:57.141 "get_zone_info": false, 00:21:57.141 "zone_management": false, 00:21:57.141 "zone_append": false, 00:21:57.141 "compare": true, 00:21:57.141 "compare_and_write": true, 00:21:57.141 "abort": true, 00:21:57.141 "seek_hole": false, 00:21:57.141 "seek_data": false, 00:21:57.141 "copy": true, 00:21:57.141 "nvme_iov_md": false 00:21:57.141 }, 00:21:57.141 "memory_domains": [ 00:21:57.141 { 00:21:57.141 "dma_device_id": "system", 00:21:57.141 "dma_device_type": 1 
00:21:57.141 } 00:21:57.141 ], 00:21:57.141 "driver_specific": { 00:21:57.141 "nvme": [ 00:21:57.141 { 00:21:57.141 "trid": { 00:21:57.141 "trtype": "TCP", 00:21:57.141 "adrfam": "IPv4", 00:21:57.141 "traddr": "10.0.0.2", 00:21:57.141 "trsvcid": "4420", 00:21:57.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.141 }, 00:21:57.141 "ctrlr_data": { 00:21:57.141 "cntlid": 2, 00:21:57.141 "vendor_id": "0x8086", 00:21:57.141 "model_number": "SPDK bdev Controller", 00:21:57.141 "serial_number": "00000000000000000000", 00:21:57.141 "firmware_revision": "24.09", 00:21:57.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:57.141 "oacs": { 00:21:57.141 "security": 0, 00:21:57.141 "format": 0, 00:21:57.141 "firmware": 0, 00:21:57.141 "ns_manage": 0 00:21:57.141 }, 00:21:57.141 "multi_ctrlr": true, 00:21:57.141 "ana_reporting": false 00:21:57.141 }, 00:21:57.141 "vs": { 00:21:57.141 "nvme_version": "1.3" 00:21:57.141 }, 00:21:57.141 "ns_data": { 00:21:57.141 "id": 1, 00:21:57.141 "can_share": true 00:21:57.141 } 00:21:57.141 } 00:21:57.141 ], 00:21:57.141 "mp_policy": "active_passive" 00:21:57.141 } 00:21:57.141 } 00:21:57.141 ] 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.H48OwgvY6J 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.H48OwgvY6J 00:21:57.141 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.142 [2024-07-12 15:58:26.796739] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.142 [2024-07-12 15:58:26.796838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H48OwgvY6J 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.142 [2024-07-12 15:58:26.804747] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.H48OwgvY6J 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.142 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.142 [2024-07-12 15:58:26.812777] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.142 [2024-07-12 15:58:26.812824] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:57.399 nvme0n1 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.399 [ 00:21:57.399 { 00:21:57.399 "name": "nvme0n1", 00:21:57.399 "aliases": [ 00:21:57.399 "ba690676-feb5-4793-9a1d-12a646e88bf6" 00:21:57.399 ], 00:21:57.399 "product_name": "NVMe disk", 00:21:57.399 "block_size": 512, 00:21:57.399 "num_blocks": 2097152, 00:21:57.399 "uuid": "ba690676-feb5-4793-9a1d-12a646e88bf6", 00:21:57.399 "assigned_rate_limits": { 00:21:57.399 "rw_ios_per_sec": 0, 00:21:57.399 "rw_mbytes_per_sec": 0, 00:21:57.399 "r_mbytes_per_sec": 0, 00:21:57.399 "w_mbytes_per_sec": 0 00:21:57.399 }, 00:21:57.399 "claimed": false, 00:21:57.399 "zoned": false, 00:21:57.399 "supported_io_types": { 00:21:57.399 "read": true, 00:21:57.399 "write": true, 00:21:57.399 "unmap": false, 00:21:57.399 "flush": true, 00:21:57.399 "reset": true, 00:21:57.399 "nvme_admin": true, 00:21:57.399 "nvme_io": true, 00:21:57.399 "nvme_io_md": false, 00:21:57.399 "write_zeroes": true, 00:21:57.399 "zcopy": false, 00:21:57.399 "get_zone_info": false, 00:21:57.399 "zone_management": false, 00:21:57.399 "zone_append": false, 00:21:57.399 "compare": true, 00:21:57.399 "compare_and_write": true, 00:21:57.399 "abort": true, 00:21:57.399 "seek_hole": false, 00:21:57.399 "seek_data": false, 00:21:57.399 "copy": true, 00:21:57.399 "nvme_iov_md": false 00:21:57.399 }, 00:21:57.399 "memory_domains": [ 00:21:57.399 { 00:21:57.399 "dma_device_id": "system", 00:21:57.399 "dma_device_type": 1 00:21:57.399 } 00:21:57.399 ], 00:21:57.399 "driver_specific": { 00:21:57.399 "nvme": [ 00:21:57.399 { 00:21:57.399 "trid": { 00:21:57.399 "trtype": "TCP", 00:21:57.399 "adrfam": "IPv4", 00:21:57.399 "traddr": "10.0.0.2", 00:21:57.399 "trsvcid": "4421", 00:21:57.399 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:57.399 }, 00:21:57.399 "ctrlr_data": { 00:21:57.399 "cntlid": 3, 00:21:57.399 "vendor_id": "0x8086", 00:21:57.399 "model_number": "SPDK bdev Controller", 00:21:57.399 "serial_number": "00000000000000000000", 00:21:57.399 "firmware_revision": "24.09", 00:21:57.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:21:57.399 "oacs": { 00:21:57.399 "security": 0, 00:21:57.399 "format": 0, 00:21:57.399 "firmware": 0, 00:21:57.399 "ns_manage": 0 00:21:57.399 }, 00:21:57.399 "multi_ctrlr": true, 00:21:57.399 "ana_reporting": false 00:21:57.399 }, 00:21:57.399 "vs": { 00:21:57.399 "nvme_version": "1.3" 00:21:57.399 }, 00:21:57.399 "ns_data": { 00:21:57.399 "id": 1, 00:21:57.399 "can_share": true 00:21:57.399 } 00:21:57.399 } 00:21:57.399 ], 00:21:57.399 "mp_policy": "active_passive" 00:21:57.399 } 00:21:57.399 } 00:21:57.399 ] 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.H48OwgvY6J 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:57.399 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.400 rmmod nvme_tcp 00:21:57.400 rmmod nvme_fabrics 00:21:57.400 rmmod nvme_keyring 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 80708 ']' 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 80708 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 80708 ']' 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 80708 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80708 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80708' 00:21:57.400 killing process with pid 80708 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 80708 00:21:57.400 [2024-07-12 15:58:26.988680] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times 00:21:57.400 [2024-07-12 15:58:26.988709] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:57.400 15:58:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 80708 00:21:57.656 15:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:57.656 15:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:57.656 15:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:57.657 15:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:57.657 15:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:57.657 15:58:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.657 15:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.657 15:58:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.555 15:58:29 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:59.555 00:21:59.555 real 0m5.557s 00:21:59.555 user 0m2.081s 00:21:59.555 sys 0m1.864s 00:21:59.555 15:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.555 15:58:29 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:59.555 ************************************ 00:21:59.555 END TEST nvmf_async_init 00:21:59.555 ************************************ 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:59.813 15:58:29 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.813 ************************************ 00:21:59.813 START TEST dma 00:21:59.813 ************************************ 00:21:59.813 15:58:29 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:59.813 * Looking for test storage... 
00:21:59.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.813 15:58:29 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.813 15:58:29 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.813 15:58:29 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.813 15:58:29 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.813 15:58:29 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:59.813 15:58:29 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.813 15:58:29 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.813 15:58:29 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:59.813 15:58:29 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:59.813 00:21:59.813 real 0m0.055s 00:21:59.813 user 0m0.028s 00:21:59.813 sys 0m0.032s 00:21:59.813 15:58:29 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.813 15:58:29 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:59.813 ************************************ 00:21:59.813 END TEST dma 00:21:59.813 ************************************ 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:59.813 15:58:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.813 15:58:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:59.813 ************************************ 00:21:59.813 START TEST nvmf_identify 00:21:59.813 ************************************ 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:59.813 * Looking for test storage... 
00:21:59.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.813 15:58:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.710 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:01.711 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:01.711 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:01.711 Found net devices under 0000:09:00.0: cvl_0_0 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:01.711 Found net devices under 0000:09:00.1: cvl_0_1 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.711 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.968 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.968 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.968 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:01.968 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.968 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.968 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.968 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:01.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:01.969 00:22:01.969 --- 10.0.0.2 ping statistics --- 00:22:01.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.969 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:01.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:22:01.969 00:22:01.969 --- 10.0.0.1 ping statistics --- 00:22:01.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.969 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=82833 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 82833 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 82833 ']' 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.969 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.969 [2024-07-12 15:58:31.640326] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:22:01.969 [2024-07-12 15:58:31.640429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.969 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.229 [2024-07-12 15:58:31.705167] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.229 [2024-07-12 15:58:31.807980] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
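
For readability, the nvmf_tcp_init sequence traced above reduces to the following plumbing: the first E810 port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and becomes the target-side interface at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1; an iptables rule then admits NVMe/TCP traffic on port 4420 and both directions are ping-checked. The condensed shell below is reconstructed from the commands in this log; it is a sketch of this run's setup, not a copy of the common.sh implementation.

  # Target-side interface lives in its own namespace; the initiator stays in the root namespace.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

  # The target application is then launched inside the namespace, as traced above:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
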
00:22:02.230 [2024-07-12 15:58:31.808031] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.230 [2024-07-12 15:58:31.808053] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.230 [2024-07-12 15:58:31.808063] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.230 [2024-07-12 15:58:31.808073] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.230 [2024-07-12 15:58:31.808159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.230 [2024-07-12 15:58:31.808223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.230 [2024-07-12 15:58:31.808297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.230 [2024-07-12 15:58:31.808293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.230 [2024-07-12 15:58:31.939976] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:02.230 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.488 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:02.488 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.488 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.488 Malloc0 00:22:02.488 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.489 15:58:31 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:02.489 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.489 15:58:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.489 [2024-07-12 15:58:32.017947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.489 [ 00:22:02.489 { 00:22:02.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:02.489 "subtype": "Discovery", 00:22:02.489 "listen_addresses": [ 00:22:02.489 { 00:22:02.489 "trtype": "TCP", 00:22:02.489 "adrfam": "IPv4", 00:22:02.489 "traddr": "10.0.0.2", 00:22:02.489 "trsvcid": "4420" 00:22:02.489 } 00:22:02.489 ], 00:22:02.489 "allow_any_host": true, 00:22:02.489 "hosts": [] 00:22:02.489 }, 00:22:02.489 { 00:22:02.489 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.489 "subtype": "NVMe", 00:22:02.489 "listen_addresses": [ 00:22:02.489 { 00:22:02.489 "trtype": "TCP", 00:22:02.489 "adrfam": "IPv4", 00:22:02.489 "traddr": "10.0.0.2", 00:22:02.489 "trsvcid": "4420" 00:22:02.489 } 00:22:02.489 ], 00:22:02.489 "allow_any_host": true, 00:22:02.489 "hosts": [], 00:22:02.489 "serial_number": "SPDK00000000000001", 00:22:02.489 "model_number": "SPDK bdev Controller", 00:22:02.489 "max_namespaces": 32, 00:22:02.489 "min_cntlid": 1, 00:22:02.489 "max_cntlid": 65519, 00:22:02.489 "namespaces": [ 00:22:02.489 { 00:22:02.489 "nsid": 1, 00:22:02.489 "bdev_name": "Malloc0", 00:22:02.489 "name": "Malloc0", 00:22:02.489 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:02.489 "eui64": "ABCDEF0123456789", 00:22:02.489 "uuid": "38a37814-bcc6-4333-b96f-b6d386010c13" 00:22:02.489 } 00:22:02.489 ] 00:22:02.489 } 00:22:02.489 ] 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.489 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:02.489 [2024-07-12 15:58:32.060761] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
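
Once the target is up, the rpc_cmd calls traced above configure it: a TCP transport with the options shown, subsystem nqn.2016-06.io.spdk:cnode1 backed by a 64 MB, 512-byte-block Malloc0 namespace, and listeners on 10.0.0.2:4420 for both the subsystem and discovery. Assuming rpc_cmd is the usual autotest wrapper that forwards its arguments to scripts/rpc.py over /var/tmp/spdk.sock (an assumption; the wrapper itself is not shown in this excerpt), the same state could be reproduced by hand as sketched below, ending with the discovery-controller identify whose debug trace and capability dump follow.

  # Run from the SPDK checkout used by this job
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Same RPCs as issued by identify.sh above (rpc.py wrapper assumed; arguments are from the log)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems

  # Identify the discovery controller over NVMe/TCP with full debug logging, as in this run
  build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all
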
00:22:02.489 [2024-07-12 15:58:32.060806] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82971 ] 00:22:02.489 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.489 [2024-07-12 15:58:32.095641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:02.489 [2024-07-12 15:58:32.095699] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:02.489 [2024-07-12 15:58:32.095709] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:02.489 [2024-07-12 15:58:32.095723] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:02.489 [2024-07-12 15:58:32.095733] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:02.489 [2024-07-12 15:58:32.099800] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:02.489 [2024-07-12 15:58:32.099866] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15146e0 0 00:22:02.489 [2024-07-12 15:58:32.107326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:02.489 [2024-07-12 15:58:32.107352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:02.489 [2024-07-12 15:58:32.107361] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:02.489 [2024-07-12 15:58:32.107368] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:02.489 [2024-07-12 15:58:32.107424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.107437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.107444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.489 [2024-07-12 15:58:32.107461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:02.489 [2024-07-12 15:58:32.107488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.489 [2024-07-12 15:58:32.115327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.489 [2024-07-12 15:58:32.115345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.489 [2024-07-12 15:58:32.115352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.489 [2024-07-12 15:58:32.115380] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:02.489 [2024-07-12 15:58:32.115392] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:02.489 [2024-07-12 15:58:32.115401] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:02.489 [2024-07-12 15:58:32.115423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115432] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.489 [2024-07-12 15:58:32.115450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.489 [2024-07-12 15:58:32.115473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.489 [2024-07-12 15:58:32.115652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.489 [2024-07-12 15:58:32.115667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.489 [2024-07-12 15:58:32.115674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.489 [2024-07-12 15:58:32.115690] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:02.489 [2024-07-12 15:58:32.115703] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:02.489 [2024-07-12 15:58:32.115715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.489 [2024-07-12 15:58:32.115740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.489 [2024-07-12 15:58:32.115761] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.489 [2024-07-12 15:58:32.115880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.489 [2024-07-12 15:58:32.115891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.489 [2024-07-12 15:58:32.115898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.489 [2024-07-12 15:58:32.115913] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:02.489 [2024-07-12 15:58:32.115926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:02.489 [2024-07-12 15:58:32.115938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.115952] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.489 [2024-07-12 15:58:32.115962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.489 [2024-07-12 15:58:32.115983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.489 [2024-07-12 15:58:32.116155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.489 
[2024-07-12 15:58:32.116170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.489 [2024-07-12 15:58:32.116176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.116183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.489 [2024-07-12 15:58:32.116192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:02.489 [2024-07-12 15:58:32.116209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.116218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.116224] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.489 [2024-07-12 15:58:32.116235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.489 [2024-07-12 15:58:32.116256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.489 [2024-07-12 15:58:32.116375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.489 [2024-07-12 15:58:32.116389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.489 [2024-07-12 15:58:32.116400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.489 [2024-07-12 15:58:32.116407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.489 [2024-07-12 15:58:32.116415] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:02.489 [2024-07-12 15:58:32.116423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:02.489 [2024-07-12 15:58:32.116437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:02.489 [2024-07-12 15:58:32.116547] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:02.490 [2024-07-12 15:58:32.116555] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:02.490 [2024-07-12 15:58:32.116569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.116576] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.116582] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.116593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.490 [2024-07-12 15:58:32.116614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.490 [2024-07-12 15:58:32.116786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.490 [2024-07-12 15:58:32.116801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.490 [2024-07-12 15:58:32.116807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.116814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.490 [2024-07-12 15:58:32.116822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:02.490 [2024-07-12 15:58:32.116839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.116848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.116854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.116865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.490 [2024-07-12 15:58:32.116886] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.490 [2024-07-12 15:58:32.117056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.490 [2024-07-12 15:58:32.117071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.490 [2024-07-12 15:58:32.117078] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.117084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.490 [2024-07-12 15:58:32.117092] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:02.490 [2024-07-12 15:58:32.117100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:02.490 [2024-07-12 15:58:32.117114] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:02.490 [2024-07-12 15:58:32.117133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:02.490 [2024-07-12 15:58:32.117149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.117157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.117171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.490 [2024-07-12 15:58:32.117193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.490 [2024-07-12 15:58:32.117386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.490 [2024-07-12 15:58:32.117402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.490 [2024-07-12 15:58:32.117408] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.117415] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15146e0): datao=0, datal=4096, cccid=0 00:22:02.490 [2024-07-12 15:58:32.117423] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1574540) on tqpair(0x15146e0): expected_datao=0, payload_size=4096 00:22:02.490 [2024-07-12 15:58:32.117430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.117462] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.117471] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.490 [2024-07-12 15:58:32.158491] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.490 [2024-07-12 15:58:32.158499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.490 [2024-07-12 15:58:32.158518] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:02.490 [2024-07-12 15:58:32.158526] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:02.490 [2024-07-12 15:58:32.158534] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:02.490 [2024-07-12 15:58:32.158542] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:02.490 [2024-07-12 15:58:32.158550] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:02.490 [2024-07-12 15:58:32.158558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:02.490 [2024-07-12 15:58:32.158573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:02.490 [2024-07-12 15:58:32.158590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.158617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:02.490 [2024-07-12 15:58:32.158640] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.490 [2024-07-12 15:58:32.158814] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.490 [2024-07-12 15:58:32.158829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.490 [2024-07-12 15:58:32.158836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.490 [2024-07-12 15:58:32.158854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.158882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.490 [2024-07-12 15:58:32.158893] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.158915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.490 [2024-07-12 15:58:32.158925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.158946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.490 [2024-07-12 15:58:32.158956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.158969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.158993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.490 [2024-07-12 15:58:32.159002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:02.490 [2024-07-12 15:58:32.159021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:02.490 [2024-07-12 15:58:32.159033] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.159040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.159051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.490 [2024-07-12 15:58:32.159073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574540, cid 0, qid 0 00:22:02.490 [2024-07-12 15:58:32.159098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15746c0, cid 1, qid 0 00:22:02.490 [2024-07-12 15:58:32.159106] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574840, cid 2, qid 0 00:22:02.490 [2024-07-12 15:58:32.159113] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.490 [2024-07-12 15:58:32.159121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574b40, cid 4, qid 0 00:22:02.490 [2024-07-12 15:58:32.159303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.490 [2024-07-12 15:58:32.163327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.490 [2024-07-12 15:58:32.163339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.163346] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574b40) on tqpair=0x15146e0 00:22:02.490 [2024-07-12 15:58:32.163355] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:02.490 [2024-07-12 15:58:32.163364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:02.490 [2024-07-12 15:58:32.163397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.163407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15146e0) 00:22:02.490 [2024-07-12 15:58:32.163418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.490 [2024-07-12 15:58:32.163444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574b40, cid 4, qid 0 00:22:02.490 [2024-07-12 15:58:32.163647] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.490 [2024-07-12 15:58:32.163659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.490 [2024-07-12 15:58:32.163666] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.163672] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15146e0): datao=0, datal=4096, cccid=4 00:22:02.490 [2024-07-12 15:58:32.163680] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1574b40) on tqpair(0x15146e0): expected_datao=0, payload_size=4096 00:22:02.490 [2024-07-12 15:58:32.163687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.163697] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.163704] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.163768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.490 [2024-07-12 15:58:32.163783] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.490 [2024-07-12 15:58:32.163789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.490 [2024-07-12 15:58:32.163796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574b40) on tqpair=0x15146e0 00:22:02.490 [2024-07-12 15:58:32.163814] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:02.490 [2024-07-12 15:58:32.163851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.163862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15146e0) 00:22:02.491 [2024-07-12 15:58:32.163873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.491 [2024-07-12 15:58:32.163885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.163892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.163898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15146e0) 00:22:02.491 [2024-07-12 15:58:32.163907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.491 [2024-07-12 15:58:32.163933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1574b40, cid 4, qid 0 00:22:02.491 [2024-07-12 15:58:32.163945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574cc0, cid 5, qid 0 00:22:02.491 [2024-07-12 15:58:32.164167] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.491 [2024-07-12 15:58:32.164182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.491 [2024-07-12 15:58:32.164189] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.164195] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15146e0): datao=0, datal=1024, cccid=4 00:22:02.491 [2024-07-12 15:58:32.164202] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1574b40) on tqpair(0x15146e0): expected_datao=0, payload_size=1024 00:22:02.491 [2024-07-12 15:58:32.164210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.164219] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.164226] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.164235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.491 [2024-07-12 15:58:32.164244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.491 [2024-07-12 15:58:32.164250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.164257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574cc0) on tqpair=0x15146e0 00:22:02.491 [2024-07-12 15:58:32.204460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.491 [2024-07-12 15:58:32.204478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.491 [2024-07-12 15:58:32.204490] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574b40) on tqpair=0x15146e0 00:22:02.491 [2024-07-12 15:58:32.204522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15146e0) 00:22:02.491 [2024-07-12 15:58:32.204544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.491 [2024-07-12 15:58:32.204574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574b40, cid 4, qid 0 00:22:02.491 [2024-07-12 15:58:32.204720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.491 [2024-07-12 15:58:32.204736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.491 [2024-07-12 15:58:32.204743] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204749] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15146e0): datao=0, datal=3072, cccid=4 00:22:02.491 [2024-07-12 15:58:32.204757] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1574b40) on tqpair(0x15146e0): expected_datao=0, payload_size=3072 00:22:02.491 [2024-07-12 15:58:32.204764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204774] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204781] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.491 [2024-07-12 15:58:32.204861] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.491 [2024-07-12 15:58:32.204867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574b40) on tqpair=0x15146e0 00:22:02.491 [2024-07-12 15:58:32.204889] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.204897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15146e0) 00:22:02.491 [2024-07-12 15:58:32.204908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.491 [2024-07-12 15:58:32.204935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1574b40, cid 4, qid 0 00:22:02.491 [2024-07-12 15:58:32.205077] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.491 [2024-07-12 15:58:32.205092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.491 [2024-07-12 15:58:32.205098] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.205105] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15146e0): datao=0, datal=8, cccid=4 00:22:02.491 [2024-07-12 15:58:32.205112] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1574b40) on tqpair(0x15146e0): expected_datao=0, payload_size=8 00:22:02.491 [2024-07-12 15:58:32.205119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.205129] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.491 [2024-07-12 15:58:32.205136] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.752 [2024-07-12 15:58:32.245462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.752 [2024-07-12 15:58:32.245483] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.752 [2024-07-12 15:58:32.245491] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.752 [2024-07-12 15:58:32.245498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574b40) on tqpair=0x15146e0 00:22:02.752 ===================================================== 00:22:02.752 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:02.752 ===================================================== 00:22:02.752 Controller Capabilities/Features 00:22:02.752 ================================ 00:22:02.752 Vendor ID: 0000 00:22:02.752 Subsystem Vendor ID: 0000 00:22:02.752 Serial Number: .................... 00:22:02.752 Model Number: ........................................ 
00:22:02.752 Firmware Version: 24.09 00:22:02.752 Recommended Arb Burst: 0 00:22:02.752 IEEE OUI Identifier: 00 00 00 00:22:02.752 Multi-path I/O 00:22:02.752 May have multiple subsystem ports: No 00:22:02.752 May have multiple controllers: No 00:22:02.752 Associated with SR-IOV VF: No 00:22:02.752 Max Data Transfer Size: 131072 00:22:02.752 Max Number of Namespaces: 0 00:22:02.752 Max Number of I/O Queues: 1024 00:22:02.752 NVMe Specification Version (VS): 1.3 00:22:02.752 NVMe Specification Version (Identify): 1.3 00:22:02.752 Maximum Queue Entries: 128 00:22:02.752 Contiguous Queues Required: Yes 00:22:02.752 Arbitration Mechanisms Supported 00:22:02.752 Weighted Round Robin: Not Supported 00:22:02.752 Vendor Specific: Not Supported 00:22:02.752 Reset Timeout: 15000 ms 00:22:02.752 Doorbell Stride: 4 bytes 00:22:02.752 NVM Subsystem Reset: Not Supported 00:22:02.752 Command Sets Supported 00:22:02.752 NVM Command Set: Supported 00:22:02.752 Boot Partition: Not Supported 00:22:02.752 Memory Page Size Minimum: 4096 bytes 00:22:02.752 Memory Page Size Maximum: 4096 bytes 00:22:02.752 Persistent Memory Region: Not Supported 00:22:02.752 Optional Asynchronous Events Supported 00:22:02.752 Namespace Attribute Notices: Not Supported 00:22:02.752 Firmware Activation Notices: Not Supported 00:22:02.752 ANA Change Notices: Not Supported 00:22:02.752 PLE Aggregate Log Change Notices: Not Supported 00:22:02.752 LBA Status Info Alert Notices: Not Supported 00:22:02.752 EGE Aggregate Log Change Notices: Not Supported 00:22:02.752 Normal NVM Subsystem Shutdown event: Not Supported 00:22:02.752 Zone Descriptor Change Notices: Not Supported 00:22:02.753 Discovery Log Change Notices: Supported 00:22:02.753 Controller Attributes 00:22:02.753 128-bit Host Identifier: Not Supported 00:22:02.753 Non-Operational Permissive Mode: Not Supported 00:22:02.753 NVM Sets: Not Supported 00:22:02.753 Read Recovery Levels: Not Supported 00:22:02.753 Endurance Groups: Not Supported 00:22:02.753 Predictable Latency Mode: Not Supported 00:22:02.753 Traffic Based Keep ALive: Not Supported 00:22:02.753 Namespace Granularity: Not Supported 00:22:02.753 SQ Associations: Not Supported 00:22:02.753 UUID List: Not Supported 00:22:02.753 Multi-Domain Subsystem: Not Supported 00:22:02.753 Fixed Capacity Management: Not Supported 00:22:02.753 Variable Capacity Management: Not Supported 00:22:02.753 Delete Endurance Group: Not Supported 00:22:02.753 Delete NVM Set: Not Supported 00:22:02.753 Extended LBA Formats Supported: Not Supported 00:22:02.753 Flexible Data Placement Supported: Not Supported 00:22:02.753 00:22:02.753 Controller Memory Buffer Support 00:22:02.753 ================================ 00:22:02.753 Supported: No 00:22:02.753 00:22:02.753 Persistent Memory Region Support 00:22:02.753 ================================ 00:22:02.753 Supported: No 00:22:02.753 00:22:02.753 Admin Command Set Attributes 00:22:02.753 ============================ 00:22:02.753 Security Send/Receive: Not Supported 00:22:02.753 Format NVM: Not Supported 00:22:02.753 Firmware Activate/Download: Not Supported 00:22:02.753 Namespace Management: Not Supported 00:22:02.753 Device Self-Test: Not Supported 00:22:02.753 Directives: Not Supported 00:22:02.753 NVMe-MI: Not Supported 00:22:02.753 Virtualization Management: Not Supported 00:22:02.753 Doorbell Buffer Config: Not Supported 00:22:02.753 Get LBA Status Capability: Not Supported 00:22:02.753 Command & Feature Lockdown Capability: Not Supported 00:22:02.753 Abort Command Limit: 1 00:22:02.753 Async 
Event Request Limit: 4 00:22:02.753 Number of Firmware Slots: N/A 00:22:02.753 Firmware Slot 1 Read-Only: N/A 00:22:02.753 Firmware Activation Without Reset: N/A 00:22:02.753 Multiple Update Detection Support: N/A 00:22:02.753 Firmware Update Granularity: No Information Provided 00:22:02.753 Per-Namespace SMART Log: No 00:22:02.753 Asymmetric Namespace Access Log Page: Not Supported 00:22:02.753 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:02.753 Command Effects Log Page: Not Supported 00:22:02.753 Get Log Page Extended Data: Supported 00:22:02.753 Telemetry Log Pages: Not Supported 00:22:02.753 Persistent Event Log Pages: Not Supported 00:22:02.753 Supported Log Pages Log Page: May Support 00:22:02.753 Commands Supported & Effects Log Page: Not Supported 00:22:02.753 Feature Identifiers & Effects Log Page:May Support 00:22:02.753 NVMe-MI Commands & Effects Log Page: May Support 00:22:02.753 Data Area 4 for Telemetry Log: Not Supported 00:22:02.753 Error Log Page Entries Supported: 128 00:22:02.753 Keep Alive: Not Supported 00:22:02.753 00:22:02.753 NVM Command Set Attributes 00:22:02.753 ========================== 00:22:02.753 Submission Queue Entry Size 00:22:02.753 Max: 1 00:22:02.753 Min: 1 00:22:02.753 Completion Queue Entry Size 00:22:02.753 Max: 1 00:22:02.753 Min: 1 00:22:02.753 Number of Namespaces: 0 00:22:02.753 Compare Command: Not Supported 00:22:02.753 Write Uncorrectable Command: Not Supported 00:22:02.753 Dataset Management Command: Not Supported 00:22:02.753 Write Zeroes Command: Not Supported 00:22:02.753 Set Features Save Field: Not Supported 00:22:02.753 Reservations: Not Supported 00:22:02.753 Timestamp: Not Supported 00:22:02.753 Copy: Not Supported 00:22:02.753 Volatile Write Cache: Not Present 00:22:02.753 Atomic Write Unit (Normal): 1 00:22:02.753 Atomic Write Unit (PFail): 1 00:22:02.753 Atomic Compare & Write Unit: 1 00:22:02.753 Fused Compare & Write: Supported 00:22:02.753 Scatter-Gather List 00:22:02.753 SGL Command Set: Supported 00:22:02.753 SGL Keyed: Supported 00:22:02.753 SGL Bit Bucket Descriptor: Not Supported 00:22:02.753 SGL Metadata Pointer: Not Supported 00:22:02.753 Oversized SGL: Not Supported 00:22:02.753 SGL Metadata Address: Not Supported 00:22:02.753 SGL Offset: Supported 00:22:02.753 Transport SGL Data Block: Not Supported 00:22:02.753 Replay Protected Memory Block: Not Supported 00:22:02.753 00:22:02.753 Firmware Slot Information 00:22:02.753 ========================= 00:22:02.753 Active slot: 0 00:22:02.753 00:22:02.753 00:22:02.753 Error Log 00:22:02.753 ========= 00:22:02.753 00:22:02.753 Active Namespaces 00:22:02.753 ================= 00:22:02.753 Discovery Log Page 00:22:02.753 ================== 00:22:02.753 Generation Counter: 2 00:22:02.753 Number of Records: 2 00:22:02.753 Record Format: 0 00:22:02.753 00:22:02.753 Discovery Log Entry 0 00:22:02.753 ---------------------- 00:22:02.753 Transport Type: 3 (TCP) 00:22:02.753 Address Family: 1 (IPv4) 00:22:02.753 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:02.753 Entry Flags: 00:22:02.753 Duplicate Returned Information: 1 00:22:02.753 Explicit Persistent Connection Support for Discovery: 1 00:22:02.753 Transport Requirements: 00:22:02.753 Secure Channel: Not Required 00:22:02.753 Port ID: 0 (0x0000) 00:22:02.753 Controller ID: 65535 (0xffff) 00:22:02.753 Admin Max SQ Size: 128 00:22:02.753 Transport Service Identifier: 4420 00:22:02.753 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:02.753 Transport Address: 10.0.0.2 00:22:02.753 
Discovery Log Entry 1 00:22:02.753 ---------------------- 00:22:02.753 Transport Type: 3 (TCP) 00:22:02.753 Address Family: 1 (IPv4) 00:22:02.753 Subsystem Type: 2 (NVM Subsystem) 00:22:02.753 Entry Flags: 00:22:02.753 Duplicate Returned Information: 0 00:22:02.753 Explicit Persistent Connection Support for Discovery: 0 00:22:02.753 Transport Requirements: 00:22:02.753 Secure Channel: Not Required 00:22:02.753 Port ID: 0 (0x0000) 00:22:02.753 Controller ID: 65535 (0xffff) 00:22:02.753 Admin Max SQ Size: 128 00:22:02.753 Transport Service Identifier: 4420 00:22:02.753 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:02.753 Transport Address: 10.0.0.2 [2024-07-12 15:58:32.245617] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:02.753 [2024-07-12 15:58:32.245638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574540) on tqpair=0x15146e0 00:22:02.753 [2024-07-12 15:58:32.245652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.753 [2024-07-12 15:58:32.245661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15746c0) on tqpair=0x15146e0 00:22:02.753 [2024-07-12 15:58:32.245669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.753 [2024-07-12 15:58:32.245677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1574840) on tqpair=0x15146e0 00:22:02.753 [2024-07-12 15:58:32.245684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.753 [2024-07-12 15:58:32.245692] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.753 [2024-07-12 15:58:32.245700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.753 [2024-07-12 15:58:32.245713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.245721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.245728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.753 [2024-07-12 15:58:32.245753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.753 [2024-07-12 15:58:32.245778] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.753 [2024-07-12 15:58:32.245952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.753 [2024-07-12 15:58:32.245968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.753 [2024-07-12 15:58:32.245975] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.245981] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.753 [2024-07-12 15:58:32.245993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.246001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.246007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.753 [2024-07-12 
15:58:32.246018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.753 [2024-07-12 15:58:32.246044] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.753 [2024-07-12 15:58:32.246189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.753 [2024-07-12 15:58:32.246204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.753 [2024-07-12 15:58:32.246211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.246218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.753 [2024-07-12 15:58:32.246226] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:02.753 [2024-07-12 15:58:32.246233] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:02.753 [2024-07-12 15:58:32.246250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.246259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.753 [2024-07-12 15:58:32.246265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.753 [2024-07-12 15:58:32.246276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.246296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.754 [2024-07-12 15:58:32.246467] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.246482] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.246493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.754 [2024-07-12 15:58:32.246517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246533] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.754 [2024-07-12 15:58:32.246543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.246564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.754 [2024-07-12 15:58:32.246684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.246699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.246706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.754 [2024-07-12 15:58:32.246729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246738] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246744] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.754 [2024-07-12 15:58:32.246755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.246775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.754 [2024-07-12 15:58:32.246895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.246910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.246917] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.754 [2024-07-12 15:58:32.246940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.246955] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.754 [2024-07-12 15:58:32.246965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.246986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.754 [2024-07-12 15:58:32.247111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.247122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.247129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.247136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.754 [2024-07-12 15:58:32.247151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.247160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.247167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.754 [2024-07-12 15:58:32.247177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.247197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.754 [2024-07-12 15:58:32.251323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.251340] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.251347] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.251354] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.754 [2024-07-12 15:58:32.251390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.251401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.251408] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15146e0) 00:22:02.754 [2024-07-12 15:58:32.251418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.251441] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15749c0, cid 3, qid 0 00:22:02.754 [2024-07-12 15:58:32.251599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.251615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.251622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.251628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15749c0) on tqpair=0x15146e0 00:22:02.754 [2024-07-12 15:58:32.251641] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:02.754 00:22:02.754 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:02.754 [2024-07-12 15:58:32.288066] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:22:02.754 [2024-07-12 15:58:32.288109] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82973 ] 00:22:02.754 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.754 [2024-07-12 15:58:32.323078] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:02.754 [2024-07-12 15:58:32.323126] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:02.754 [2024-07-12 15:58:32.323135] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:02.754 [2024-07-12 15:58:32.323148] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:02.754 [2024-07-12 15:58:32.323158] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:02.754 [2024-07-12 15:58:32.323672] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:02.754 [2024-07-12 15:58:32.323725] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a546e0 0 00:22:02.754 [2024-07-12 15:58:32.338327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:02.754 [2024-07-12 15:58:32.338350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:02.754 [2024-07-12 15:58:32.338359] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:02.754 [2024-07-12 15:58:32.338365] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:02.754 [2024-07-12 15:58:32.338411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.338423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.338430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.754 [2024-07-12 15:58:32.338444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:02.754 [2024-07-12 15:58:32.338486] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.754 [2024-07-12 15:58:32.344327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.344350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.344358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.754 [2024-07-12 15:58:32.344385] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:02.754 [2024-07-12 15:58:32.344397] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:02.754 [2024-07-12 15:58:32.344406] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:02.754 [2024-07-12 15:58:32.344424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344433] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.754 [2024-07-12 15:58:32.344452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.344476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.754 [2024-07-12 15:58:32.344609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.344625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.344631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.754 [2024-07-12 15:58:32.344646] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:02.754 [2024-07-12 15:58:32.344660] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:02.754 [2024-07-12 15:58:32.344673] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.754 [2024-07-12 15:58:32.344698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.344720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.754 [2024-07-12 15:58:32.344853] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.344865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.344871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on 
tqpair=0x1a546e0 00:22:02.754 [2024-07-12 15:58:32.344886] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:02.754 [2024-07-12 15:58:32.344900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:02.754 [2024-07-12 15:58:32.344913] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.344926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.754 [2024-07-12 15:58:32.344937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.754 [2024-07-12 15:58:32.344959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.754 [2024-07-12 15:58:32.345078] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.754 [2024-07-12 15:58:32.345093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.754 [2024-07-12 15:58:32.345104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.754 [2024-07-12 15:58:32.345111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.755 [2024-07-12 15:58:32.345120] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:02.755 [2024-07-12 15:58:32.345137] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345146] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.345164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.755 [2024-07-12 15:58:32.345186] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.755 [2024-07-12 15:58:32.345304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.755 [2024-07-12 15:58:32.345354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.755 [2024-07-12 15:58:32.345365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.755 [2024-07-12 15:58:32.345379] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:02.755 [2024-07-12 15:58:32.345388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:02.755 [2024-07-12 15:58:32.345402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:02.755 [2024-07-12 15:58:32.345513] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:02.755 [2024-07-12 15:58:32.345523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:02.755 [2024-07-12 15:58:32.345536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.345561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.755 [2024-07-12 15:58:32.345584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.755 [2024-07-12 15:58:32.345707] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.755 [2024-07-12 15:58:32.345722] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.755 [2024-07-12 15:58:32.345729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.755 [2024-07-12 15:58:32.345744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:02.755 [2024-07-12 15:58:32.345761] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.345788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.755 [2024-07-12 15:58:32.345809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.755 [2024-07-12 15:58:32.345925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.755 [2024-07-12 15:58:32.345942] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.755 [2024-07-12 15:58:32.345949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.345956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.755 [2024-07-12 15:58:32.345964] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:02.755 [2024-07-12 15:58:32.345972] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.345986] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:02.755 [2024-07-12 15:58:32.346004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.346017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346025] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.346036] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.755 [2024-07-12 15:58:32.346058] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.755 [2024-07-12 15:58:32.346221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.755 [2024-07-12 15:58:32.346234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.755 [2024-07-12 15:58:32.346240] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346246] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=4096, cccid=0 00:22:02.755 [2024-07-12 15:58:32.346254] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4540) on tqpair(0x1a546e0): expected_datao=0, payload_size=4096 00:22:02.755 [2024-07-12 15:58:32.346261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346272] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346279] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.755 [2024-07-12 15:58:32.346324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.755 [2024-07-12 15:58:32.346332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.755 [2024-07-12 15:58:32.346350] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:02.755 [2024-07-12 15:58:32.346359] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:02.755 [2024-07-12 15:58:32.346366] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:02.755 [2024-07-12 15:58:32.346373] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:02.755 [2024-07-12 15:58:32.346381] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:02.755 [2024-07-12 15:58:32.346389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.346403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.346419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.346448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:02.755 [2024-07-12 15:58:32.346472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.755 [2024-07-12 15:58:32.346611] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.755 [2024-07-12 15:58:32.346626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.755 [2024-07-12 15:58:32.346633] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.755 [2024-07-12 15:58:32.346650] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.346674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.755 [2024-07-12 15:58:32.346684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.346706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.755 [2024-07-12 15:58:32.346716] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346722] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.346738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.755 [2024-07-12 15:58:32.346747] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.346769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.755 [2024-07-12 15:58:32.346778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.346797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.346810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.346817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a546e0) 00:22:02.755 [2024-07-12 15:58:32.346828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.755 [2024-07-12 15:58:32.346866] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4540, cid 0, qid 0 00:22:02.755 [2024-07-12 15:58:32.346877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1ab46c0, cid 1, qid 0 00:22:02.755 [2024-07-12 15:58:32.346885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4840, cid 2, qid 0 00:22:02.755 [2024-07-12 15:58:32.346892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.755 [2024-07-12 15:58:32.346915] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4b40, cid 4, qid 0 00:22:02.755 [2024-07-12 15:58:32.347065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.755 [2024-07-12 15:58:32.347084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.755 [2024-07-12 15:58:32.347091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.755 [2024-07-12 15:58:32.347098] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4b40) on tqpair=0x1a546e0 00:22:02.755 [2024-07-12 15:58:32.347106] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:02.755 [2024-07-12 15:58:32.347115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.347134] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.347146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:02.755 [2024-07-12 15:58:32.347157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347165] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.347182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:02.756 [2024-07-12 15:58:32.347203] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4b40, cid 4, qid 0 00:22:02.756 [2024-07-12 15:58:32.347328] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.756 [2024-07-12 15:58:32.347342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.756 [2024-07-12 15:58:32.347348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4b40) on tqpair=0x1a546e0 00:22:02.756 [2024-07-12 15:58:32.347423] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.347443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.347458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.347477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:02.756 [2024-07-12 15:58:32.347498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4b40, cid 4, qid 0 00:22:02.756 [2024-07-12 15:58:32.347639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.756 [2024-07-12 15:58:32.347651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.756 [2024-07-12 15:58:32.347658] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347664] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=4096, cccid=4 00:22:02.756 [2024-07-12 15:58:32.347672] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4b40) on tqpair(0x1a546e0): expected_datao=0, payload_size=4096 00:22:02.756 [2024-07-12 15:58:32.347679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347689] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347697] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.756 [2024-07-12 15:58:32.347734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.756 [2024-07-12 15:58:32.347740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4b40) on tqpair=0x1a546e0 00:22:02.756 [2024-07-12 15:58:32.347765] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:02.756 [2024-07-12 15:58:32.347785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.347802] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.347816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.347824] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.347835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.756 [2024-07-12 15:58:32.347857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4b40, cid 4, qid 0 00:22:02.756 [2024-07-12 15:58:32.348005] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.756 [2024-07-12 15:58:32.348020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.756 [2024-07-12 15:58:32.348027] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348033] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=4096, cccid=4 00:22:02.756 [2024-07-12 15:58:32.348041] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4b40) on tqpair(0x1a546e0): expected_datao=0, payload_size=4096 00:22:02.756 [2024-07-12 15:58:32.348048] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348069] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348079] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.756 [2024-07-12 15:58:32.348176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.756 [2024-07-12 15:58:32.348183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4b40) on tqpair=0x1a546e0 00:22:02.756 [2024-07-12 15:58:32.348210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.348262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.756 [2024-07-12 15:58:32.348284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4b40, cid 4, qid 0 00:22:02.756 [2024-07-12 15:58:32.348424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.756 [2024-07-12 15:58:32.348438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.756 [2024-07-12 15:58:32.348445] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348451] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=4096, cccid=4 00:22:02.756 [2024-07-12 15:58:32.348459] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4b40) on tqpair(0x1a546e0): expected_datao=0, payload_size=4096 00:22:02.756 [2024-07-12 15:58:32.348466] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348476] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348483] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.756 [2024-07-12 15:58:32.348519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.756 [2024-07-12 15:58:32.348527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4b40) on tqpair=0x1a546e0 00:22:02.756 [2024-07-12 15:58:32.348546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348595] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348612] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:02.756 [2024-07-12 15:58:32.348619] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:02.756 [2024-07-12 15:58:32.348628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:02.756 [2024-07-12 15:58:32.348647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348656] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.348667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.756 [2024-07-12 15:58:32.348679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348693] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.348702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:02.756 [2024-07-12 15:58:32.348743] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4b40, cid 4, qid 0 00:22:02.756 [2024-07-12 15:58:32.348755] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4cc0, cid 5, qid 0 00:22:02.756 [2024-07-12 15:58:32.348908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.756 [2024-07-12 15:58:32.348921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.756 [2024-07-12 15:58:32.348928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4b40) on tqpair=0x1a546e0 00:22:02.756 [2024-07-12 15:58:32.348946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.756 [2024-07-12 15:58:32.348955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.756 [2024-07-12 15:58:32.348961] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348968] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4cc0) on tqpair=0x1a546e0 00:22:02.756 [2024-07-12 15:58:32.348984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.348993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.349004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.756 
[2024-07-12 15:58:32.349029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4cc0, cid 5, qid 0 00:22:02.756 [2024-07-12 15:58:32.349159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.756 [2024-07-12 15:58:32.349174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.756 [2024-07-12 15:58:32.349181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.349188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4cc0) on tqpair=0x1a546e0 00:22:02.756 [2024-07-12 15:58:32.349204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.756 [2024-07-12 15:58:32.349213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a546e0) 00:22:02.756 [2024-07-12 15:58:32.349224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.756 [2024-07-12 15:58:32.349245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4cc0, cid 5, qid 0 00:22:02.756 [2024-07-12 15:58:32.353340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.757 [2024-07-12 15:58:32.353358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.757 [2024-07-12 15:58:32.353365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4cc0) on tqpair=0x1a546e0 00:22:02.757 [2024-07-12 15:58:32.353390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353399] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a546e0) 00:22:02.757 [2024-07-12 15:58:32.353410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.757 [2024-07-12 15:58:32.353433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4cc0, cid 5, qid 0 00:22:02.757 [2024-07-12 15:58:32.353551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.757 [2024-07-12 15:58:32.353563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.757 [2024-07-12 15:58:32.353570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353577] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4cc0) on tqpair=0x1a546e0 00:22:02.757 [2024-07-12 15:58:32.353601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a546e0) 00:22:02.757 [2024-07-12 15:58:32.353623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.757 [2024-07-12 15:58:32.353636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a546e0) 00:22:02.757 [2024-07-12 15:58:32.353654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:02.757 [2024-07-12 15:58:32.353666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a546e0) 00:22:02.757 [2024-07-12 15:58:32.353683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.757 [2024-07-12 15:58:32.353695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353703] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a546e0) 00:22:02.757 [2024-07-12 15:58:32.353713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.757 [2024-07-12 15:58:32.353739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4cc0, cid 5, qid 0 00:22:02.757 [2024-07-12 15:58:32.353750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4b40, cid 4, qid 0 00:22:02.757 [2024-07-12 15:58:32.353758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4e40, cid 6, qid 0 00:22:02.757 [2024-07-12 15:58:32.353766] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4fc0, cid 7, qid 0 00:22:02.757 [2024-07-12 15:58:32.353964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.757 [2024-07-12 15:58:32.353979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.757 [2024-07-12 15:58:32.353986] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.353992] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=8192, cccid=5 00:22:02.757 [2024-07-12 15:58:32.354000] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4cc0) on tqpair(0x1a546e0): expected_datao=0, payload_size=8192 00:22:02.757 [2024-07-12 15:58:32.354007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354110] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354120] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.757 [2024-07-12 15:58:32.354138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.757 [2024-07-12 15:58:32.354145] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354151] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=512, cccid=4 00:22:02.757 [2024-07-12 15:58:32.354159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4b40) on tqpair(0x1a546e0): expected_datao=0, payload_size=512 00:22:02.757 [2024-07-12 15:58:32.354166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354175] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354182] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.757 [2024-07-12 15:58:32.354200] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.757 [2024-07-12 15:58:32.354207] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354213] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=512, cccid=6 00:22:02.757 [2024-07-12 15:58:32.354220] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4e40) on tqpair(0x1a546e0): expected_datao=0, payload_size=512 00:22:02.757 [2024-07-12 15:58:32.354227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354237] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354244] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354253] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:02.757 [2024-07-12 15:58:32.354262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:02.757 [2024-07-12 15:58:32.354268] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354275] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a546e0): datao=0, datal=4096, cccid=7 00:22:02.757 [2024-07-12 15:58:32.354282] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ab4fc0) on tqpair(0x1a546e0): expected_datao=0, payload_size=4096 00:22:02.757 [2024-07-12 15:58:32.354289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354299] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354306] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.757 [2024-07-12 15:58:32.354338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.757 [2024-07-12 15:58:32.354349] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4cc0) on tqpair=0x1a546e0 00:22:02.757 [2024-07-12 15:58:32.354375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.757 [2024-07-12 15:58:32.354386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.757 [2024-07-12 15:58:32.354392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4b40) on tqpair=0x1a546e0 00:22:02.757 [2024-07-12 15:58:32.354415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.757 [2024-07-12 15:58:32.354425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.757 [2024-07-12 15:58:32.354431] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354438] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4e40) on tqpair=0x1a546e0 00:22:02.757 [2024-07-12 15:58:32.354449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.757 [2024-07-12 15:58:32.354458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.757 [2024-07-12 15:58:32.354464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.757 [2024-07-12 15:58:32.354471] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4fc0) on tqpair=0x1a546e0 00:22:02.757 ===================================================== 00:22:02.757 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.757 ===================================================== 00:22:02.757 Controller Capabilities/Features 00:22:02.757 ================================ 00:22:02.757 Vendor ID: 8086 00:22:02.757 Subsystem Vendor ID: 8086 00:22:02.757 Serial Number: SPDK00000000000001 00:22:02.757 Model Number: SPDK bdev Controller 00:22:02.757 Firmware Version: 24.09 00:22:02.757 Recommended Arb Burst: 6 00:22:02.757 IEEE OUI Identifier: e4 d2 5c 00:22:02.757 Multi-path I/O 00:22:02.757 May have multiple subsystem ports: Yes 00:22:02.757 May have multiple controllers: Yes 00:22:02.757 Associated with SR-IOV VF: No 00:22:02.757 Max Data Transfer Size: 131072 00:22:02.757 Max Number of Namespaces: 32 00:22:02.757 Max Number of I/O Queues: 127 00:22:02.757 NVMe Specification Version (VS): 1.3 00:22:02.757 NVMe Specification Version (Identify): 1.3 00:22:02.757 Maximum Queue Entries: 128 00:22:02.757 Contiguous Queues Required: Yes 00:22:02.757 Arbitration Mechanisms Supported 00:22:02.757 Weighted Round Robin: Not Supported 00:22:02.757 Vendor Specific: Not Supported 00:22:02.757 Reset Timeout: 15000 ms 00:22:02.757 Doorbell Stride: 4 bytes 00:22:02.757 NVM Subsystem Reset: Not Supported 00:22:02.757 Command Sets Supported 00:22:02.757 NVM Command Set: Supported 00:22:02.757 Boot Partition: Not Supported 00:22:02.757 Memory Page Size Minimum: 4096 bytes 00:22:02.757 Memory Page Size Maximum: 4096 bytes 00:22:02.757 Persistent Memory Region: Not Supported 00:22:02.757 Optional Asynchronous Events Supported 00:22:02.757 Namespace Attribute Notices: Supported 00:22:02.757 Firmware Activation Notices: Not Supported 00:22:02.757 ANA Change Notices: Not Supported 00:22:02.757 PLE Aggregate Log Change Notices: Not Supported 00:22:02.757 LBA Status Info Alert Notices: Not Supported 00:22:02.757 EGE Aggregate Log Change Notices: Not Supported 00:22:02.757 Normal NVM Subsystem Shutdown event: Not Supported 00:22:02.757 Zone Descriptor Change Notices: Not Supported 00:22:02.757 Discovery Log Change Notices: Not Supported 00:22:02.758 Controller Attributes 00:22:02.758 128-bit Host Identifier: Supported 00:22:02.758 Non-Operational Permissive Mode: Not Supported 00:22:02.758 NVM Sets: Not Supported 00:22:02.758 Read Recovery Levels: Not Supported 00:22:02.758 Endurance Groups: Not Supported 00:22:02.758 Predictable Latency Mode: Not Supported 00:22:02.758 Traffic Based Keep ALive: Not Supported 00:22:02.758 Namespace Granularity: Not Supported 00:22:02.758 SQ Associations: Not Supported 00:22:02.758 UUID List: Not Supported 00:22:02.758 Multi-Domain Subsystem: Not Supported 00:22:02.758 Fixed Capacity Management: Not Supported 00:22:02.758 Variable Capacity Management: Not Supported 00:22:02.758 Delete Endurance Group: Not Supported 00:22:02.758 Delete NVM Set: Not Supported 00:22:02.758 Extended LBA Formats Supported: Not Supported 00:22:02.758 Flexible Data Placement Supported: Not Supported 00:22:02.758 00:22:02.758 Controller Memory Buffer Support 00:22:02.758 ================================ 00:22:02.758 Supported: No 00:22:02.758 00:22:02.758 Persistent Memory Region Support 00:22:02.758 ================================ 00:22:02.758 Supported: No 00:22:02.758 00:22:02.758 Admin Command Set Attributes 00:22:02.758 ============================ 00:22:02.758 Security 
Send/Receive: Not Supported 00:22:02.758 Format NVM: Not Supported 00:22:02.758 Firmware Activate/Download: Not Supported 00:22:02.758 Namespace Management: Not Supported 00:22:02.758 Device Self-Test: Not Supported 00:22:02.758 Directives: Not Supported 00:22:02.758 NVMe-MI: Not Supported 00:22:02.758 Virtualization Management: Not Supported 00:22:02.758 Doorbell Buffer Config: Not Supported 00:22:02.758 Get LBA Status Capability: Not Supported 00:22:02.758 Command & Feature Lockdown Capability: Not Supported 00:22:02.758 Abort Command Limit: 4 00:22:02.758 Async Event Request Limit: 4 00:22:02.758 Number of Firmware Slots: N/A 00:22:02.758 Firmware Slot 1 Read-Only: N/A 00:22:02.758 Firmware Activation Without Reset: N/A 00:22:02.758 Multiple Update Detection Support: N/A 00:22:02.758 Firmware Update Granularity: No Information Provided 00:22:02.758 Per-Namespace SMART Log: No 00:22:02.758 Asymmetric Namespace Access Log Page: Not Supported 00:22:02.758 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:02.758 Command Effects Log Page: Supported 00:22:02.758 Get Log Page Extended Data: Supported 00:22:02.758 Telemetry Log Pages: Not Supported 00:22:02.758 Persistent Event Log Pages: Not Supported 00:22:02.758 Supported Log Pages Log Page: May Support 00:22:02.758 Commands Supported & Effects Log Page: Not Supported 00:22:02.758 Feature Identifiers & Effects Log Page:May Support 00:22:02.758 NVMe-MI Commands & Effects Log Page: May Support 00:22:02.758 Data Area 4 for Telemetry Log: Not Supported 00:22:02.758 Error Log Page Entries Supported: 128 00:22:02.758 Keep Alive: Supported 00:22:02.758 Keep Alive Granularity: 10000 ms 00:22:02.758 00:22:02.758 NVM Command Set Attributes 00:22:02.758 ========================== 00:22:02.758 Submission Queue Entry Size 00:22:02.758 Max: 64 00:22:02.758 Min: 64 00:22:02.758 Completion Queue Entry Size 00:22:02.758 Max: 16 00:22:02.758 Min: 16 00:22:02.758 Number of Namespaces: 32 00:22:02.758 Compare Command: Supported 00:22:02.758 Write Uncorrectable Command: Not Supported 00:22:02.758 Dataset Management Command: Supported 00:22:02.758 Write Zeroes Command: Supported 00:22:02.758 Set Features Save Field: Not Supported 00:22:02.758 Reservations: Supported 00:22:02.758 Timestamp: Not Supported 00:22:02.758 Copy: Supported 00:22:02.758 Volatile Write Cache: Present 00:22:02.758 Atomic Write Unit (Normal): 1 00:22:02.758 Atomic Write Unit (PFail): 1 00:22:02.758 Atomic Compare & Write Unit: 1 00:22:02.758 Fused Compare & Write: Supported 00:22:02.758 Scatter-Gather List 00:22:02.758 SGL Command Set: Supported 00:22:02.758 SGL Keyed: Supported 00:22:02.758 SGL Bit Bucket Descriptor: Not Supported 00:22:02.758 SGL Metadata Pointer: Not Supported 00:22:02.758 Oversized SGL: Not Supported 00:22:02.758 SGL Metadata Address: Not Supported 00:22:02.758 SGL Offset: Supported 00:22:02.758 Transport SGL Data Block: Not Supported 00:22:02.758 Replay Protected Memory Block: Not Supported 00:22:02.758 00:22:02.758 Firmware Slot Information 00:22:02.758 ========================= 00:22:02.758 Active slot: 1 00:22:02.758 Slot 1 Firmware Revision: 24.09 00:22:02.758 00:22:02.758 00:22:02.758 Commands Supported and Effects 00:22:02.758 ============================== 00:22:02.758 Admin Commands 00:22:02.758 -------------- 00:22:02.758 Get Log Page (02h): Supported 00:22:02.758 Identify (06h): Supported 00:22:02.758 Abort (08h): Supported 00:22:02.758 Set Features (09h): Supported 00:22:02.758 Get Features (0Ah): Supported 00:22:02.758 Asynchronous Event Request (0Ch): 
Supported 00:22:02.758 Keep Alive (18h): Supported 00:22:02.758 I/O Commands 00:22:02.758 ------------ 00:22:02.758 Flush (00h): Supported LBA-Change 00:22:02.758 Write (01h): Supported LBA-Change 00:22:02.758 Read (02h): Supported 00:22:02.758 Compare (05h): Supported 00:22:02.758 Write Zeroes (08h): Supported LBA-Change 00:22:02.758 Dataset Management (09h): Supported LBA-Change 00:22:02.758 Copy (19h): Supported LBA-Change 00:22:02.758 00:22:02.758 Error Log 00:22:02.758 ========= 00:22:02.758 00:22:02.758 Arbitration 00:22:02.758 =========== 00:22:02.758 Arbitration Burst: 1 00:22:02.758 00:22:02.758 Power Management 00:22:02.758 ================ 00:22:02.758 Number of Power States: 1 00:22:02.758 Current Power State: Power State #0 00:22:02.758 Power State #0: 00:22:02.758 Max Power: 0.00 W 00:22:02.758 Non-Operational State: Operational 00:22:02.758 Entry Latency: Not Reported 00:22:02.758 Exit Latency: Not Reported 00:22:02.758 Relative Read Throughput: 0 00:22:02.758 Relative Read Latency: 0 00:22:02.758 Relative Write Throughput: 0 00:22:02.758 Relative Write Latency: 0 00:22:02.758 Idle Power: Not Reported 00:22:02.758 Active Power: Not Reported 00:22:02.758 Non-Operational Permissive Mode: Not Supported 00:22:02.758 00:22:02.758 Health Information 00:22:02.758 ================== 00:22:02.758 Critical Warnings: 00:22:02.758 Available Spare Space: OK 00:22:02.758 Temperature: OK 00:22:02.758 Device Reliability: OK 00:22:02.758 Read Only: No 00:22:02.758 Volatile Memory Backup: OK 00:22:02.758 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:02.758 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:02.758 Available Spare: 0% 00:22:02.758 Available Spare Threshold: 0% 00:22:02.758 Life Percentage Used:[2024-07-12 15:58:32.354631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.758 [2024-07-12 15:58:32.354644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a546e0) 00:22:02.758 [2024-07-12 15:58:32.354655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.758 [2024-07-12 15:58:32.354677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab4fc0, cid 7, qid 0 00:22:02.758 [2024-07-12 15:58:32.354833] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.758 [2024-07-12 15:58:32.354848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.758 [2024-07-12 15:58:32.354855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.758 [2024-07-12 15:58:32.354862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4fc0) on tqpair=0x1a546e0 00:22:02.758 [2024-07-12 15:58:32.354908] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:02.758 [2024-07-12 15:58:32.354928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4540) on tqpair=0x1a546e0 00:22:02.758 [2024-07-12 15:58:32.354938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.758 [2024-07-12 15:58:32.354947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab46c0) on tqpair=0x1a546e0 00:22:02.758 [2024-07-12 15:58:32.354955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.758 [2024-07-12 
15:58:32.354963] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab4840) on tqpair=0x1a546e0 00:22:02.758 [2024-07-12 15:58:32.354971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.758 [2024-07-12 15:58:32.354979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.758 [2024-07-12 15:58:32.354987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:02.758 [2024-07-12 15:58:32.354999] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.758 [2024-07-12 15:58:32.355007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.758 [2024-07-12 15:58:32.355014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.758 [2024-07-12 15:58:32.355025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.758 [2024-07-12 15:58:32.355051] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.758 [2024-07-12 15:58:32.355177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.758 [2024-07-12 15:58:32.355189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.758 [2024-07-12 15:58:32.355196] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.758 [2024-07-12 15:58:32.355202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.758 [2024-07-12 15:58:32.355213] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.758 [2024-07-12 15:58:32.355221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.758 [2024-07-12 15:58:32.355228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.758 [2024-07-12 15:58:32.355239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.759 [2024-07-12 15:58:32.355265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.759 [2024-07-12 15:58:32.355410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.759 [2024-07-12 15:58:32.355426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.759 [2024-07-12 15:58:32.355432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.759 [2024-07-12 15:58:32.355447] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:02.759 [2024-07-12 15:58:32.355455] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:02.759 [2024-07-12 15:58:32.355471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355487] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.759 [2024-07-12 15:58:32.355497] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.759 [2024-07-12 15:58:32.355519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.759 [2024-07-12 15:58:32.355650] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.759 [2024-07-12 15:58:32.355665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.759 [2024-07-12 15:58:32.355672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.759 [2024-07-12 15:58:32.355696] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355712] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.759 [2024-07-12 15:58:32.355723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.759 [2024-07-12 15:58:32.355744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.759 [2024-07-12 15:58:32.355860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.759 [2024-07-12 15:58:32.355872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.759 [2024-07-12 15:58:32.355879] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.759 [2024-07-12 15:58:32.355901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.355917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.759 [2024-07-12 15:58:32.355932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.759 [2024-07-12 15:58:32.355954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.759 [2024-07-12 15:58:32.356081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.759 [2024-07-12 15:58:32.356096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.759 [2024-07-12 15:58:32.356103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.356110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.759 [2024-07-12 15:58:32.356126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.356136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.356142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.759 [2024-07-12 15:58:32.356153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.759 [2024-07-12 15:58:32.356174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.759 [2024-07-12 15:58:32.356292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.759 [2024-07-12 15:58:32.356307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.759 [2024-07-12 15:58:32.356313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.360348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.759 [2024-07-12 15:58:32.360367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.360377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.360383] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a546e0) 00:22:02.759 [2024-07-12 15:58:32.360410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:02.759 [2024-07-12 15:58:32.360432] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ab49c0, cid 3, qid 0 00:22:02.759 [2024-07-12 15:58:32.360561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:02.759 [2024-07-12 15:58:32.360573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:02.759 [2024-07-12 15:58:32.360580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:02.759 [2024-07-12 15:58:32.360586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ab49c0) on tqpair=0x1a546e0 00:22:02.759 [2024-07-12 15:58:32.360599] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:02.759 0% 00:22:02.759 Data Units Read: 0 00:22:02.759 Data Units Written: 0 00:22:02.759 Host Read Commands: 0 00:22:02.759 Host Write Commands: 0 00:22:02.759 Controller Busy Time: 0 minutes 00:22:02.759 Power Cycles: 0 00:22:02.759 Power On Hours: 0 hours 00:22:02.759 Unsafe Shutdowns: 0 00:22:02.759 Unrecoverable Media Errors: 0 00:22:02.759 Lifetime Error Log Entries: 0 00:22:02.759 Warning Temperature Time: 0 minutes 00:22:02.759 Critical Temperature Time: 0 minutes 00:22:02.759 00:22:02.759 Number of Queues 00:22:02.759 ================ 00:22:02.759 Number of I/O Submission Queues: 127 00:22:02.759 Number of I/O Completion Queues: 127 00:22:02.759 00:22:02.759 Active Namespaces 00:22:02.759 ================= 00:22:02.759 Namespace ID:1 00:22:02.759 Error Recovery Timeout: Unlimited 00:22:02.759 Command Set Identifier: NVM (00h) 00:22:02.759 Deallocate: Supported 00:22:02.759 Deallocated/Unwritten Error: Not Supported 00:22:02.759 Deallocated Read Value: Unknown 00:22:02.759 Deallocate in Write Zeroes: Not Supported 00:22:02.759 Deallocated Guard Field: 0xFFFF 00:22:02.759 Flush: Supported 00:22:02.759 Reservation: Supported 00:22:02.759 Namespace Sharing Capabilities: Multiple Controllers 00:22:02.759 Size (in LBAs): 131072 (0GiB) 00:22:02.759 Capacity (in LBAs): 131072 (0GiB) 00:22:02.759 Utilization (in LBAs): 131072 (0GiB) 00:22:02.759 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:02.759 EUI64: ABCDEF0123456789 00:22:02.759 UUID: 38a37814-bcc6-4333-b96f-b6d386010c13 00:22:02.759 Thin Provisioning: Not Supported 00:22:02.759 Per-NS Atomic Units: Yes 00:22:02.759 Atomic Boundary Size (Normal): 0 00:22:02.759 Atomic Boundary Size (PFail): 0 00:22:02.759 Atomic Boundary Offset: 0 00:22:02.759 Maximum Single Source Range Length: 
65535 00:22:02.759 Maximum Copy Length: 65535 00:22:02.759 Maximum Source Range Count: 1 00:22:02.759 NGUID/EUI64 Never Reused: No 00:22:02.759 Namespace Write Protected: No 00:22:02.759 Number of LBA Formats: 1 00:22:02.759 Current LBA Format: LBA Format #00 00:22:02.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:02.759 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.759 rmmod nvme_tcp 00:22:02.759 rmmod nvme_fabrics 00:22:02.759 rmmod nvme_keyring 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 82833 ']' 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 82833 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 82833 ']' 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 82833 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82833 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82833' 00:22:02.759 killing process with pid 82833 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 82833 00:22:02.759 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 82833 00:22:03.323 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.323 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.323 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.323 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.323 15:58:32 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.323 15:58:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.323 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.323 15:58:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.226 15:58:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.226 00:22:05.226 real 0m5.388s 00:22:05.226 user 0m4.338s 00:22:05.226 sys 0m1.837s 00:22:05.226 15:58:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.226 15:58:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:05.226 ************************************ 00:22:05.226 END TEST nvmf_identify 00:22:05.226 ************************************ 00:22:05.226 15:58:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:05.226 15:58:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:05.226 15:58:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:05.226 15:58:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.226 15:58:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:05.226 ************************************ 00:22:05.226 START TEST nvmf_perf 00:22:05.226 ************************************ 00:22:05.226 15:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:05.226 * Looking for test storage... 00:22:05.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.226 15:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.226 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:05.226 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.227 15:58:34 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.227 15:58:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:07.758 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:07.758 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:07.758 Found net devices under 0000:09:00.0: cvl_0_0 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:07.758 Found net devices under 0000:09:00.1: cvl_0_1 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.758 15:58:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:22:07.758 00:22:07.758 --- 10.0.0.2 ping statistics --- 00:22:07.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.758 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:22:07.758 00:22:07.758 --- 10.0.0.1 ping statistics --- 00:22:07.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.758 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:07.758 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=84905 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 84905 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 84905 ']' 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.759 [2024-07-12 15:58:37.117937] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
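The nvmftestinit trace above is the whole of the physical-NIC test network for this run: one e810 port (cvl_0_0) is moved into a private network namespace for the SPDK target while the second port (cvl_0_1) stays in the default namespace for the initiator, and TCP port 4420 is opened between them. A minimal sketch of that bring-up in bash, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing used here:

ip netns add cvl_0_0_ns_spdk                                   # namespace that will host nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
modprobe nvme-tcp                                              # kernel NVMe/TCP initiator transport

The single-reply ping statistics above (0.274 ms and 0.210 ms) confirm the two sides can reach each other before the target application is started.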
00:22:07.759 [2024-07-12 15:58:37.118018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.759 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.759 [2024-07-12 15:58:37.184907] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.759 [2024-07-12 15:58:37.295781] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.759 [2024-07-12 15:58:37.295831] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.759 [2024-07-12 15:58:37.295844] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.759 [2024-07-12 15:58:37.295855] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.759 [2024-07-12 15:58:37.295864] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.759 [2024-07-12 15:58:37.295917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.759 [2024-07-12 15:58:37.295940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.759 [2024-07-12 15:58:37.295995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.759 [2024-07-12 15:58:37.295998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:07.759 15:58:37 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:11.034 15:58:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:11.034 15:58:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:11.291 15:58:40 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:22:11.291 15:58:40 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:11.578 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:11.578 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:22:11.578 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:11.578 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:11.578 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:11.835 [2024-07-12 15:58:41.316846] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:22:11.835 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.092 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:12.092 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.349 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:12.349 15:58:41 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:12.605 15:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.862 [2024-07-12 15:58:42.360690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.862 15:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:13.120 15:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:22:13.120 15:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:13.120 15:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:13.120 15:58:42 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:22:14.489 Initializing NVMe Controllers 00:22:14.489 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:22:14.489 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:22:14.489 Initialization complete. Launching workers. 00:22:14.489 ======================================================== 00:22:14.489 Latency(us) 00:22:14.489 Device Information : IOPS MiB/s Average min max 00:22:14.489 PCIE (0000:0b:00.0) NSID 1 from core 0: 84447.83 329.87 378.26 27.22 6256.27 00:22:14.489 ======================================================== 00:22:14.489 Total : 84447.83 329.87 378.26 27.22 6256.27 00:22:14.489 00:22:14.489 15:58:43 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:14.489 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.860 Initializing NVMe Controllers 00:22:15.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:15.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:15.860 Initialization complete. Launching workers. 
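Behind the host/perf.sh trace above, the target is assembled with a handful of rpc.py calls: a TCP transport, one subsystem, two namespaces (a RAM-backed Malloc0 plus the local NVMe drive exported as Nvme0n1), and data/discovery listeners on 10.0.0.2:4420. A condensed sketch of that sequence, assuming the same rpc.py path, NQN, and addresses as this run (the output of the first fabric perf run continues below):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py bdev_malloc_create 64 512                              # 64 MB malloc bdev, 512 B blocks -> Malloc0
$rpc_py nvmf_create_transport -t tcp -o                        # options exactly as traced above
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420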
00:22:15.860 ======================================================== 00:22:15.860 Latency(us) 00:22:15.860 Device Information : IOPS MiB/s Average min max 00:22:15.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 68.00 0.27 15155.30 186.55 45376.29 00:22:15.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 46.00 0.18 21857.95 7934.41 47928.13 00:22:15.860 ======================================================== 00:22:15.860 Total : 114.00 0.45 17859.88 186.55 47928.13 00:22:15.860 00:22:15.860 15:58:45 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:15.860 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.793 Initializing NVMe Controllers 00:22:16.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:16.793 Initialization complete. Launching workers. 00:22:16.793 ======================================================== 00:22:16.793 Latency(us) 00:22:16.793 Device Information : IOPS MiB/s Average min max 00:22:16.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8426.98 32.92 3805.69 625.69 7449.71 00:22:16.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3826.99 14.95 8398.43 5091.84 16295.72 00:22:16.793 ======================================================== 00:22:16.793 Total : 12253.98 47.87 5240.03 625.69 16295.72 00:22:16.793 00:22:16.793 15:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:16.793 15:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:16.793 15:58:46 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:16.793 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.318 Initializing NVMe Controllers 00:22:19.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.318 Controller IO queue size 128, less than required. 00:22:19.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.318 Controller IO queue size 128, less than required. 00:22:19.318 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:19.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:19.318 Initialization complete. Launching workers. 
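Each fabric measurement in this test is the same spdk_nvme_perf binary pointed at the listener created earlier, with only the queue depth (-q), I/O size (-o), and the 50/50 random read/write mix varied. For reference, one of the invocations traced above, copied verbatim (results for the 128-deep, 256 KiB run launched above continue below):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'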
00:22:19.318 ======================================================== 00:22:19.318 Latency(us) 00:22:19.318 Device Information : IOPS MiB/s Average min max 00:22:19.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1115.76 278.94 117371.84 71818.45 167023.05 00:22:19.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 619.37 154.84 216617.85 68650.46 342270.08 00:22:19.318 ======================================================== 00:22:19.318 Total : 1735.13 433.78 152798.46 68650.46 342270.08 00:22:19.318 00:22:19.318 15:58:48 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:19.318 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.575 No valid NVMe controllers or AIO or URING devices found 00:22:19.575 Initializing NVMe Controllers 00:22:19.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:19.575 Controller IO queue size 128, less than required. 00:22:19.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:19.575 Controller IO queue size 128, less than required. 00:22:19.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:19.575 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:19.575 WARNING: Some requested NVMe devices were skipped 00:22:19.575 15:58:49 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:19.575 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.099 Initializing NVMe Controllers 00:22:22.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.099 Controller IO queue size 128, less than required. 00:22:22.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:22.099 Controller IO queue size 128, less than required. 00:22:22.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:22.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:22.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:22.099 Initialization complete. Launching workers. 
00:22:22.099 00:22:22.099 ==================== 00:22:22.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:22.099 TCP transport: 00:22:22.099 polls: 22356 00:22:22.099 idle_polls: 7765 00:22:22.099 sock_completions: 14591 00:22:22.099 nvme_completions: 4627 00:22:22.099 submitted_requests: 6854 00:22:22.099 queued_requests: 1 00:22:22.099 00:22:22.099 ==================== 00:22:22.099 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:22.099 TCP transport: 00:22:22.099 polls: 25096 00:22:22.099 idle_polls: 10625 00:22:22.099 sock_completions: 14471 00:22:22.099 nvme_completions: 4605 00:22:22.099 submitted_requests: 6852 00:22:22.099 queued_requests: 1 00:22:22.099 ======================================================== 00:22:22.099 Latency(us) 00:22:22.099 Device Information : IOPS MiB/s Average min max 00:22:22.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1154.71 288.68 113660.28 72480.60 211061.66 00:22:22.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1149.22 287.31 112274.51 49305.93 178008.70 00:22:22.099 ======================================================== 00:22:22.099 Total : 2303.93 575.98 112969.04 49305.93 211061.66 00:22:22.099 00:22:22.099 15:58:51 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:22.099 15:58:51 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.357 rmmod nvme_tcp 00:22:22.357 rmmod nvme_fabrics 00:22:22.357 rmmod nvme_keyring 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 84905 ']' 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 84905 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 84905 ']' 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 84905 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84905 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84905' 00:22:22.357 killing process with pid 84905 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 84905 00:22:22.357 15:58:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 84905 00:22:24.252 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:24.252 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:24.252 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:24.252 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.253 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:24.253 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.253 15:58:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.253 15:58:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.155 15:58:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.155 00:22:26.155 real 0m20.690s 00:22:26.155 user 1m2.715s 00:22:26.155 sys 0m5.394s 00:22:26.155 15:58:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.155 15:58:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:26.155 ************************************ 00:22:26.155 END TEST nvmf_perf 00:22:26.155 ************************************ 00:22:26.155 15:58:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:26.155 15:58:55 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:26.155 15:58:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:26.155 15:58:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.155 15:58:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.155 ************************************ 00:22:26.155 START TEST nvmf_fio_host 00:22:26.155 ************************************ 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:26.155 * Looking for test storage... 
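The nvmf_perf run that just finished reduces to a handful of target-side RPCs, a series of spdk_nvme_perf invocations against the exported subsystem, and a teardown. A minimal sketch of that flow, assuming an nvmf_tgt with a TCP transport is already up and the SPDK tree is checked out at ./spdk (the shortened path and the bdev names are illustrative):

  $ ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $ ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $ ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $ ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $ # queue depth, IO size, read/write mix and duration mirror the -q/-o/-w/-M/-t values in the trace;
  $ # --transport-stat additionally dumps the per-queue TCP poll/completion counters shown above
  $ ./spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  $ ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  $ modprobe -r nvme-tcp nvme-fabrics

The local baseline at the start of the run is the same tool pointed at the raw SSD with -r 'trtype:PCIe traddr:0000:0b:00.0' instead of the TCP triple.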
00:22:26.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.155 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.156 15:58:55 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:28.059 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:28.059 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:28.059 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:28.060 Found net devices under 0000:09:00.0: cvl_0_0 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:28.060 Found net devices under 0000:09:00.1: cvl_0_1 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
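The probing loop that just ran is plain sysfs walking: it gathers the PCI functions whose IDs match the E810 family (0x8086:0x159b on this rig) and resolves each one to its kernel netdev through /sys/bus/pci/devices/<bdf>/net, which is how 0000:09:00.0 and 0000:09:00.1 end up mapped to cvl_0_0 and cvl_0_1. Roughly the same discovery can be done by hand; the lspci filter and the sysfs path are standard, only the device ID is specific to this testbed:

  $ for bdf in $(lspci -D -d 8086:159b | awk '{print $1}'); do
  >     # one netdev per port function on this setup, e.g. 0000:09:00.0 -> cvl_0_0
  >     echo "$bdf -> $(ls /sys/bus/pci/devices/$bdf/net)"
  > done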
00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.060 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:22:28.318 00:22:28.318 --- 10.0.0.2 ping statistics --- 00:22:28.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.318 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:22:28.318 00:22:28.318 --- 10.0.0.1 ping statistics --- 00:22:28.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.318 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88748 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88748 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 88748 ']' 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.318 15:58:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.318 [2024-07-12 15:58:57.977229] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:22:28.318 [2024-07-12 15:58:57.977323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.318 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.318 [2024-07-12 15:58:58.041113] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.576 [2024-07-12 15:58:58.148390] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
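At this point nvmf_tcp_init has split the two E810 ports across network namespaces: the target port (cvl_0_0, 10.0.0.2) was moved into cvl_0_0_ns_spdk, where nvmf_tgt now runs, while the initiator port (cvl_0_1, 10.0.0.1) stays in the default namespace, so host and target traffic really crosses the NICs rather than loopback. Condensed from the trace (root required; interface and namespace names are those of this testbed, and ./spdk is shorthand for the workspace path):

  $ ip netns add cvl_0_0_ns_spdk
  $ ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  $ ip addr add 10.0.0.1/24 dev cvl_0_1
  $ ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  $ ip link set cvl_0_1 up
  $ ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  $ ip netns exec cvl_0_0_ns_spdk ip link set lo up
  $ iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  $ # the target itself is then launched inside the namespace:
  $ ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF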
00:22:28.576 [2024-07-12 15:58:58.148441] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.576 [2024-07-12 15:58:58.148465] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.576 [2024-07-12 15:58:58.148475] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.576 [2024-07-12 15:58:58.148491] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.576 [2024-07-12 15:58:58.148551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.576 [2024-07-12 15:58:58.148617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.576 [2024-07-12 15:58:58.148682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.577 [2024-07-12 15:58:58.148684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.577 15:58:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.577 15:58:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:28.577 15:58:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:28.834 [2024-07-12 15:58:58.549996] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.119 15:58:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:29.119 15:58:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:29.119 15:58:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.119 15:58:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:29.376 Malloc1 00:22:29.376 15:58:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.634 15:58:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:29.892 15:58:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.149 [2024-07-12 15:58:59.638099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.149 15:58:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:30.407 15:58:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:30.407 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:30.407 fio-3.35 00:22:30.407 Starting 1 thread 00:22:30.664 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.192 00:22:33.192 test: (groupid=0, jobs=1): err= 0: pid=89223: Fri Jul 12 15:59:02 2024 00:22:33.192 read: IOPS=9007, BW=35.2MiB/s (36.9MB/s)(70.6MiB/2007msec) 00:22:33.192 slat (usec): min=2, max=183, avg= 2.71, stdev= 1.88 00:22:33.192 clat (usec): min=2487, max=14191, avg=7849.12, stdev=582.83 00:22:33.192 lat (usec): min=2516, max=14194, avg=7851.83, stdev=582.70 00:22:33.192 clat percentiles (usec): 00:22:33.192 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7373], 00:22:33.192 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:22:33.192 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:22:33.192 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11731], 99.95th=[12780], 00:22:33.192 | 99.99th=[13960] 00:22:33.192 bw ( KiB/s): min=35560, 
max=36488, per=99.95%, avg=36014.00, stdev=389.15, samples=4 00:22:33.192 iops : min= 8890, max= 9122, avg=9003.50, stdev=97.29, samples=4 00:22:33.192 write: IOPS=9026, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2007msec); 0 zone resets 00:22:33.192 slat (usec): min=2, max=139, avg= 2.87, stdev= 1.43 00:22:33.192 clat (usec): min=2044, max=12722, avg=6311.11, stdev=522.00 00:22:33.192 lat (usec): min=2053, max=12725, avg=6313.98, stdev=521.94 00:22:33.192 clat percentiles (usec): 00:22:33.192 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:22:33.192 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:22:33.192 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7046], 00:22:33.192 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[10028], 99.95th=[11731], 00:22:33.192 | 99.99th=[12649] 00:22:33.192 bw ( KiB/s): min=35760, max=36336, per=100.00%, avg=36124.00, stdev=253.11, samples=4 00:22:33.192 iops : min= 8940, max= 9084, avg=9031.00, stdev=63.28, samples=4 00:22:33.192 lat (msec) : 4=0.13%, 10=99.73%, 20=0.15% 00:22:33.192 cpu : usr=57.78%, sys=36.64%, ctx=72, majf=0, minf=33 00:22:33.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:33.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:33.192 issued rwts: total=18079,18117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.193 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:33.193 00:22:33.193 Run status group 0 (all jobs): 00:22:33.193 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.6MiB (74.1MB), run=2007-2007msec 00:22:33.193 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2007-2007msec 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:33.193 15:59:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:33.193 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:33.193 fio-3.35 00:22:33.193 Starting 1 thread 00:22:33.193 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.715 00:22:35.715 test: (groupid=0, jobs=1): err= 0: pid=89559: Fri Jul 12 15:59:05 2024 00:22:35.715 read: IOPS=8007, BW=125MiB/s (131MB/s)(251MiB/2008msec) 00:22:35.715 slat (usec): min=3, max=115, avg= 3.86, stdev= 1.80 00:22:35.715 clat (usec): min=2603, max=17881, avg=9454.34, stdev=2217.06 00:22:35.715 lat (usec): min=2606, max=17884, avg=9458.20, stdev=2217.14 00:22:35.715 clat percentiles (usec): 00:22:35.715 | 1.00th=[ 4752], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7570], 00:22:35.715 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[ 9896], 00:22:35.715 | 70.00th=[10552], 80.00th=[11207], 90.00th=[12256], 95.00th=[13042], 00:22:35.715 | 99.00th=[15664], 99.50th=[16450], 99.90th=[17695], 99.95th=[17695], 00:22:35.715 | 99.99th=[17695] 00:22:35.715 bw ( KiB/s): min=57376, max=74880, per=51.17%, avg=65560.00, stdev=7240.84, samples=4 00:22:35.715 iops : min= 3586, max= 4680, avg=4097.50, stdev=452.55, samples=4 00:22:35.715 write: IOPS=4717, BW=73.7MiB/s (77.3MB/s)(135MiB/1827msec); 0 zone resets 00:22:35.715 slat (usec): min=30, max=135, avg=33.27, stdev= 4.30 00:22:35.715 clat (usec): min=4937, max=20674, avg=11484.49, stdev=2176.13 00:22:35.715 lat (usec): min=4984, max=20708, avg=11517.77, stdev=2176.42 00:22:35.715 clat percentiles (usec): 00:22:35.715 | 1.00th=[ 7242], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9634], 00:22:35.715 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:22:35.715 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14353], 95.00th=[15270], 00:22:35.715 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:22:35.715 | 99.99th=[20579] 00:22:35.715 bw ( KiB/s): min=59520, max=78848, per=90.67%, avg=68440.00, stdev=8078.18, samples=4 00:22:35.715 iops : min= 3720, max= 4928, avg=4277.50, stdev=504.89, samples=4 00:22:35.715 lat (msec) : 4=0.21%, 10=50.16%, 20=49.62%, 50=0.01% 00:22:35.715 cpu : usr=74.39%, sys=21.87%, ctx=35, majf=0, minf=55 00:22:35.715 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:35.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.715 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.715 issued rwts: total=16080,8619,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.715 00:22:35.715 Run status group 0 (all jobs): 00:22:35.715 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2008-2008msec 00:22:35.715 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=135MiB (141MB), run=1827-1827msec 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.715 rmmod nvme_tcp 00:22:35.715 rmmod nvme_fabrics 00:22:35.715 rmmod nvme_keyring 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 88748 ']' 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 88748 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 88748 ']' 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 88748 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88748 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88748' 00:22:35.715 killing process with pid 88748 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 88748 00:22:35.715 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 88748 00:22:35.973 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.973 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.973 15:59:05 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.973 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.973 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.973 15:59:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.973 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.973 15:59:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.504 15:59:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.504 00:22:38.504 real 0m12.088s 00:22:38.504 user 0m34.640s 00:22:38.504 sys 0m4.251s 00:22:38.504 15:59:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:38.504 15:59:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:38.504 ************************************ 00:22:38.504 END TEST nvmf_fio_host 00:22:38.504 ************************************ 00:22:38.504 15:59:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:38.504 15:59:07 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:38.504 15:59:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:38.504 15:59:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:38.504 15:59:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.504 ************************************ 00:22:38.504 START TEST nvmf_failover 00:22:38.504 ************************************ 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:38.504 * Looking for test storage... 
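The fio_host test that just completed drives the same NVMe/TCP subsystem through fio's external SPDK ioengine instead of spdk_nvme_perf: the spdk_nvme plugin is LD_PRELOADed into a stock fio binary, and the target is named by a --filename string carrying the transport tuple rather than a block-device path. A minimal sketch, assuming fio lives at /usr/src/fio and the SPDK tree at ./spdk (the job file is the example config shipped in the repo):

  $ LD_PRELOAD=./spdk/build/fio/spdk_nvme /usr/src/fio/fio \
        ./spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The second fio invocation in the trace runs mock_sgl_config.fio the same way, just with 16 KiB IOs.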
00:22:38.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.504 15:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.505 15:59:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.404 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:40.405 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:40.405 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:40.405 Found net devices under 0000:09:00.0: cvl_0_0 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:40.405 Found net devices under 0000:09:00.1: cvl_0_1 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:22:40.405 00:22:40.405 --- 10.0.0.2 ping statistics --- 00:22:40.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.405 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:40.405 00:22:40.405 --- 10.0.0.1 ping statistics --- 00:22:40.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.405 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=91753 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 91753 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 91753 ']' 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.405 15:59:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.405 [2024-07-12 15:59:09.985005] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
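For readability, the phy-mode TCP setup that nvmftestinit drives in the trace above boils down to roughly the following shell sequence (paraphrased from the xtrace lines; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the ones detected and assigned on this particular host, so they are not fixed values of the harness):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                  # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic reach the initiator side
  ping -c 1 10.0.0.2                                                  # host -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host sanity check

The nvmf_tgt process is then launched with ip netns exec cvl_0_0_ns_spdk prepended, so the target listens from inside the target-side namespace while bdevperf connects from the host side.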
00:22:40.405 [2024-07-12 15:59:09.985089] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.405 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.405 [2024-07-12 15:59:10.054869] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.662 [2024-07-12 15:59:10.161328] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.662 [2024-07-12 15:59:10.161378] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.662 [2024-07-12 15:59:10.161392] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.662 [2024-07-12 15:59:10.161403] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.662 [2024-07-12 15:59:10.161413] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.662 [2024-07-12 15:59:10.161482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.662 [2024-07-12 15:59:10.161542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.662 [2024-07-12 15:59:10.161545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.662 15:59:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.662 15:59:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:40.662 15:59:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.662 15:59:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.662 15:59:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:40.662 15:59:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.662 15:59:10 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:40.919 [2024-07-12 15:59:10.524171] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.919 15:59:10 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:41.175 Malloc0 00:22:41.175 15:59:10 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:41.431 15:59:11 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:41.697 15:59:11 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:41.954 [2024-07-12 15:59:11.524179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.954 15:59:11 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:42.210 [2024-07-12 
15:59:11.768971] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:42.210 15:59:11 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:42.467 [2024-07-12 15:59:12.017791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=92040 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 92040 /var/tmp/bdevperf.sock 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 92040 ']' 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:42.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:42.467 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:42.724 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.724 15:59:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:42.724 15:59:12 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.288 NVMe0n1 00:22:43.288 15:59:12 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:43.544 00:22:43.544 15:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=92176 00:22:43.544 15:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.544 15:59:13 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:44.914 15:59:14 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.914 [2024-07-12 15:59:14.510821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ea0c0 is same with the state(5) to be set 00:22:44.914 15:59:14 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:48.226 15:59:17 nvmf_tcp.nvmf_failover -- 
host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.484 00:22:48.484 15:59:18 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:48.741 [2024-07-12 15:59:18.370037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370113] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370175] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the 
state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.741 [2024-07-12 15:59:18.370398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370474] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370510] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 [2024-07-12 15:59:18.370671] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20eb080 is same with the state(5) to be set 00:22:48.742 15:59:18 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:52.019 15:59:21 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.019 [2024-07-12 15:59:21.665735] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.019 15:59:21 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:53.391 15:59:22 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:53.392 [2024-07-12 15:59:22.962861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4e90 is same with the state(5) to be set 00:22:53.392 [2024-07-12 15:59:22.962921] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4e90 is same with the state(5) to be set 00:22:53.392 [2024-07-12 15:59:22.962936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4e90 is same with the state(5) to be set 00:22:53.392 [2024-07-12 15:59:22.962948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4e90 is same with the state(5) to be set 00:22:53.392 [2024-07-12 15:59:22.962959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4e90 is same with the state(5) to be set 00:22:53.392 [2024-07-12 15:59:22.962971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4e90 is same with the state(5) to be set 00:22:53.392 [2024-07-12 15:59:22.962983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a4e90 is same with the state(5) to be set 00:22:53.392 15:59:22 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 92176 00:22:58.656 0 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 92040 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 92040 ']' 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 92040 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92040 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92040' 00:22:58.913 killing process with pid 92040 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 92040 00:22:58.913 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 
92040 00:22:59.178 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.178 [2024-07-12 15:59:12.082111] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:22:59.178 [2024-07-12 15:59:12.082202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92040 ] 00:22:59.178 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.178 [2024-07-12 15:59:12.142850] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.178 [2024-07-12 15:59:12.252639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.178 Running I/O for 15 seconds... 00:22:59.178 [2024-07-12 15:59:14.512251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.178 [2024-07-12 
15:59:14.512550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.178 [2024-07-12 15:59:14.512579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.178 [2024-07-12 15:59:14.512872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.178 [2024-07-12 15:59:14.512886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.512900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.512915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.512929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.512943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.512958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.512972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.512987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.179 [2024-07-12 15:59:14.513573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.179 [2024-07-12 15:59:14.513601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.179 [2024-07-12 15:59:14.513643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.179 [2024-07-12 15:59:14.513671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.179 [2024-07-12 15:59:14.513698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.179 [2024-07-12 15:59:14.513726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.179 [2024-07-12 15:59:14.513753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 
15:59:14.513767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.513983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.513997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.514011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.514025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.514040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.514053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.514068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.514082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.514097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.514110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.514125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.179 [2024-07-12 15:59:14.514138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.179 [2024-07-12 15:59:14.514154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:116 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.180 [2024-07-12 15:59:14.514612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.514675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81632 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.514689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.514718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.514729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81640 
len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.514742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514755] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.514766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.514777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81648 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.514789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.514813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.514823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81656 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.514836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.514859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.514870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81664 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.514882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.514906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.514916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81672 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.514929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.514942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.514954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.514964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81680 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.514988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81688 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 
15:59:14.515036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81696 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81704 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81712 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81720 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81728 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81736 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81744 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81752 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.180 [2024-07-12 15:59:14.515457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.180 [2024-07-12 15:59:14.515468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.180 [2024-07-12 15:59:14.515479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81760 len:8 PRP1 0x0 PRP2 0x0 00:22:59.180 [2024-07-12 15:59:14.515492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81768 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81776 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81784 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81792 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81800 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81808 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81816 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81824 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81832 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:59.181 [2024-07-12 15:59:14.515950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.515960] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.515971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81840 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.515983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.515996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81848 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81856 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81864 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81872 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81880 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516235] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81888 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516281] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81896 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516353] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81904 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81912 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516509] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:22:59.181 [2024-07-12 15:59:14.516700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.181 [2024-07-12 15:59:14.516713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.181 [2024-07-12 15:59:14.516724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.181 [2024-07-12 15:59:14.516735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81008 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.516747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.516760] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.516771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.516782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81016 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.516795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.516808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.516819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.516830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81024 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.516843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.516856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.516866] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.516877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81032 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.516890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.516903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.516917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.516928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81040 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.516941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.516955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.516966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.516977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.516990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.517013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.517024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81056 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.517037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.517060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.517071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81064 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.517083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.517107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.517117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81072 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.517130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.517153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.517164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.517176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.182 [2024-07-12 15:59:14.517199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.182 [2024-07-12 15:59:14.517210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81088 len:8 PRP1 0x0 PRP2 0x0 00:22:59.182 [2024-07-12 15:59:14.517222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517279] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa9f380 was disconnected and freed. reset controller. 00:22:59.182 [2024-07-12 15:59:14.517297] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:59.182 [2024-07-12 15:59:14.517354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:14.517374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:14.517408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:14.517435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:14.517470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:14.517483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.182 [2024-07-12 15:59:14.517561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa792e0 (9): Bad file descriptor 00:22:59.182 [2024-07-12 15:59:14.520802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.182 [2024-07-12 15:59:14.603037] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:59.182 [2024-07-12 15:59:18.369931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:18.370139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.370160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:18.370175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.370190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:18.370204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.370219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.182 [2024-07-12 15:59:18.370232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.370261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa792e0 is same with the state(5) to be set 00:22:59.182 [2024-07-12 15:59:18.371517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-07-12 15:59:18.371542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.371571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-07-12 15:59:18.371587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.371614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-07-12 15:59:18.371629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.371660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-07-12 15:59:18.371681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.371697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-07-12 15:59:18.371726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.371742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-07-12 15:59:18.371755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.182 [2024-07-12 15:59:18.371770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.182 [2024-07-12 15:59:18.371784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.371812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.371841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.371868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.371896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.371924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.371952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.371981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.371996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 
[2024-07-12 15:59:18.372685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.183 [2024-07-12 15:59:18.372919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.183 [2024-07-12 15:59:18.372948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.183 [2024-07-12 15:59:18.372977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.372993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.183 [2024-07-12 15:59:18.373007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.373023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.183 [2024-07-12 15:59:18.373037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.373052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.183 [2024-07-12 15:59:18.373065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.183 [2024-07-12 15:59:18.373080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.183 [2024-07-12 15:59:18.373094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:104 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.373422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75032 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 
15:59:18.373943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.373971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.373986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.184 [2024-07-12 15:59:18.374408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.184 [2024-07-12 15:59:18.374439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.184 [2024-07-12 15:59:18.374455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.374469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.374499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.374529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.374559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.374599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.374645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.374674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.374983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.374998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.375012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.185 [2024-07-12 15:59:18.375051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 
15:59:18.375241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.185 [2024-07-12 15:59:18.375560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.185 [2024-07-12 15:59:18.375601] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:59.185 [2024-07-12 15:59:18.375618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:59.185 [2024-07-12 15:59:18.375645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0
00:22:59.185 [2024-07-12 15:59:18.375660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.185 [2024-07-12 15:59:18.375724] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc44080 was disconnected and freed. reset controller.
00:22:59.185 [2024-07-12 15:59:18.375743] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:59.185 [2024-07-12 15:59:18.375764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:59.185 [2024-07-12 15:59:18.379103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:59.185 [2024-07-12 15:59:18.379143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa792e0 (9): Bad file descriptor
00:22:59.185 [2024-07-12 15:59:18.450274] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:59.185 [2024-07-12 15:59:22.963246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.185 [2024-07-12 15:59:22.963286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.185 [2024-07-12 15:59:22.963304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.185 [2024-07-12 15:59:22.963327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.185 [2024-07-12 15:59:22.963344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.185 [2024-07-12 15:59:22.963358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.185 [2024-07-12 15:59:22.963372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:59.185 [2024-07-12 15:59:22.963385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.185 [2024-07-12 15:59:22.963399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa792e0 is same with the state(5) to be set
00:22:59.185 [2024-07-12 15:59:22.963456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:59.186 [2024-07-12 15:59:22.963477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:59.186 [2024-07-12 15:59:22.963504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:59.186 [2024-07-12 15:59:22.963520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.963972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.963986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.186 [2024-07-12 15:59:22.964472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.186 [2024-07-12 15:59:22.964642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.186 [2024-07-12 15:59:22.964655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:16784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964781] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.964984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.964998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:26 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17056 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.187 [2024-07-12 15:59:22.965939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.187 [2024-07-12 15:59:22.965952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.965968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.965981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.965999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:59.188 [2024-07-12 15:59:22.966014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.966491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.966520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.966549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.966579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.966619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.966664] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.966693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.966972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.966986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.967015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.967043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.967073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.967102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.967131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:59.188 [2024-07-12 15:59:22.967160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.967193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.967222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.188 [2024-07-12 15:59:22.967237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.967251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:59.188 [2024-07-12 15:59:22.967266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.188 [2024-07-12 15:59:22.967280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.189 [2024-07-12 15:59:22.967295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.189 [2024-07-12 15:59:22.967309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.189 [2024-07-12 15:59:22.967357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.189 [2024-07-12 15:59:22.967373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.189 [2024-07-12 15:59:22.967388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:59.189 [2024-07-12 15:59:22.967403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.189 [2024-07-12 15:59:22.967430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:59.189 [2024-07-12 15:59:22.967445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:59.189 [2024-07-12 15:59:22.967458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:8 PRP1 0x0 PRP2 0x0 00:22:59.189 [2024-07-12 15:59:22.967471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.189 [2024-07-12 15:59:22.967533] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc43d40 was disconnected and freed. reset controller. 00:22:59.189 [2024-07-12 15:59:22.967552] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:59.189 [2024-07-12 15:59:22.967577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.189 [2024-07-12 15:59:22.970914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.189 [2024-07-12 15:59:22.970953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa792e0 (9): Bad file descriptor 00:22:59.189 [2024-07-12 15:59:23.045358] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:59.189
00:22:59.189 Latency(us)
00:22:59.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:59.189 Verification LBA range: start 0x0 length 0x4000
00:22:59.189 NVMe0n1 : 15.01 8355.38 32.64 557.26 0.00 14332.26 801.00 20388.98
00:22:59.189 ===================================================================================================================
00:22:59.189 Total : 8355.38 32.64 557.26 0.00 14332.26 801.00 20388.98
00:22:59.189 Received shutdown signal, test time was about 15.000000 seconds
00:22:59.189
00:22:59.189 Latency(us)
00:22:59.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.189 ===================================================================================================================
00:22:59.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=94012
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 94012 /var/tmp/bdevperf.sock
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 94012 ']'
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:59.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.189 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:59.446 15:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.446 15:59:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:59.446 15:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:59.703 [2024-07-12 15:59:29.258045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:59.704 15:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:59.960 [2024-07-12 15:59:29.494777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:59.960 15:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.217 NVMe0n1 00:23:00.217 15:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.474 00:23:00.474 15:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.039 00:23:01.039 15:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.039 15:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:01.296 15:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.552 15:59:31 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:04.826 15:59:34 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.826 15:59:34 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:04.826 15:59:34 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=94686 00:23:04.826 15:59:34 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:04.826 15:59:34 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 94686 00:23:06.241 0 00:23:06.241 15:59:35 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:06.241 [2024-07-12 15:59:28.744774] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:23:06.241 [2024-07-12 15:59:28.744876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94012 ] 00:23:06.241 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.241 [2024-07-12 15:59:28.805920] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.241 [2024-07-12 15:59:28.915139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.241 [2024-07-12 15:59:31.147394] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:06.241 [2024-07-12 15:59:31.147467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.241 [2024-07-12 15:59:31.147489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.241 [2024-07-12 15:59:31.147506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.241 [2024-07-12 15:59:31.147521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.241 [2024-07-12 15:59:31.147536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.241 [2024-07-12 15:59:31.147550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.241 [2024-07-12 15:59:31.147566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.241 [2024-07-12 15:59:31.147581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.241 [2024-07-12 15:59:31.147607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:06.241 [2024-07-12 15:59:31.147655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:06.241 [2024-07-12 15:59:31.147686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe292e0 (9): Bad file descriptor 00:23:06.241 [2024-07-12 15:59:31.153967] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:06.241 Running I/O for 1 seconds... 
00:23:06.241
00:23:06.241 Latency(us)
00:23:06.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.241 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:06.241 Verification LBA range: start 0x0 length 0x4000
00:23:06.241 NVMe0n1 : 1.01 7912.15 30.91 0.00 0.00 16106.00 3373.89 14369.37
00:23:06.241 ===================================================================================================================
00:23:06.241 Total : 7912.15 30.91 0.00 0.00 16106.00 3373.89 14369.37
00:23:06.241 15:59:35 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:06.241 15:59:35 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:06.241 15:59:35 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:06.497 15:59:36 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:06.497 15:59:36 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:23:06.753 15:59:36 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:07.010 15:59:36 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.796 rmmod nvme_tcp 00:23:10.796 rmmod nvme_fabrics 00:23:10.796 rmmod nvme_keyring 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 91753 ']' 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 91753 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 91753 ']' 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 91753 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91753 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91753' 00:23:10.796 killing process with pid 91753 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 91753 00:23:10.796 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 91753 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.362 15:59:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.262 15:59:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.262 00:23:13.262 real 0m35.094s 00:23:13.262 user 2m2.355s 00:23:13.262 sys 0m6.389s 00:23:13.262 15:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:13.262 15:59:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.262 
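Condensed from the xtrace above, the failover exercise boils down to the following RPC sequence. This is a hedged sketch, not the test script itself: it assumes the commands are run from the root of the same SPDK checkout, and it reuses the addresses, ports, NQN, and socket paths this particular job happened to use.

  # Target side: expose nqn.2016-06.io.spdk:cnode1 on two additional TCP ports
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Host side: bdevperf was started with '-z -r /var/tmp/bdevperf.sock', so it waits for RPCs;
  # attach the same controller through all three ports to give the bdev layer failover paths
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Pull the active path, run verify I/O, then peel off the remaining paths; each removal
  # shows up in the log as a failover followed by "Resetting controller successful"
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # The pass/fail check in the script is simply a count of successful resets in the captured output
  grep -c 'Resetting controller successful' test/nvmf/host/try.txt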
************************************ 00:23:13.262 END TEST nvmf_failover 00:23:13.262 ************************************ 00:23:13.262 15:59:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:13.263 15:59:42 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:13.263 15:59:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:13.263 15:59:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.263 15:59:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:13.263 ************************************ 00:23:13.263 START TEST nvmf_host_discovery 00:23:13.263 ************************************ 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:13.263 * Looking for test storage... 00:23:13.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:13.263 15:59:42 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.263 15:59:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.791 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.792 15:59:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:15.792 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:15.792 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.792 15:59:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:15.792 Found net devices under 0000:09:00.0: cvl_0_0 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:15.792 Found net devices under 0000:09:00.1: cvl_0_1 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.792 15:59:44 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.792 15:59:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:15.792 00:23:15.792 --- 10.0.0.2 ping statistics --- 00:23:15.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.792 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:23:15.792 00:23:15.792 --- 10.0.0.1 ping statistics --- 00:23:15.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.792 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=97294 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 97294 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 97294 ']' 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.792 [2024-07-12 15:59:45.184987] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:23:15.792 [2024-07-12 15:59:45.185074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.792 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.792 [2024-07-12 15:59:45.249675] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.792 [2024-07-12 15:59:45.355885] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.792 [2024-07-12 15:59:45.355941] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.792 [2024-07-12 15:59:45.355963] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.792 [2024-07-12 15:59:45.355974] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.792 [2024-07-12 15:59:45.355983] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
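The trace above (nvmf/common.sh, nvmf_tcp_init) moves one of the two detected e810 ports into a network namespace for the target and keeps the other in the root namespace as the initiator, then launches nvmf_tgt inside that namespace. A rough sketch of that wiring, using the interface names and addresses this job detected (cvl_0_0 / cvl_0_1, 10.0.0.2 / 10.0.0.1); run as root, and the nvmf_tgt path assumes the job's workspace layout:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept TCP/4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator sanity check
  # The discovery test then starts the target inside the namespace, as logged above:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2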
00:23:15.792 [2024-07-12 15:59:45.356008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.792 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.793 [2024-07-12 15:59:45.501136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.793 [2024-07-12 15:59:45.509280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.793 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.052 null0 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.052 null1 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=97371 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 
-- # waitforlisten 97371 /tmp/host.sock 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 97371 ']' 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:16.052 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:16.052 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.052 [2024-07-12 15:59:45.591506] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:23:16.052 [2024-07-12 15:59:45.591597] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97371 ] 00:23:16.052 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.052 [2024-07-12 15:59:45.654893] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.052 [2024-07-12 15:59:45.771130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.310 15:59:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:16.310 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.310 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.310 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.310 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.310 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.310 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.310 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:16.568 15:59:46 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 [2024-07-12 15:59:46.183104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.568 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:16.826 15:59:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:17.392 [2024-07-12 15:59:46.918870] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:17.392 [2024-07-12 15:59:46.918900] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:17.392 [2024-07-12 15:59:46.918923] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:17.392 [2024-07-12 15:59:47.047343] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:17.649 [2024-07-12 15:59:47.150287] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:17.649 [2024-07-12 15:59:47.150338] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.649 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:17.907 15:59:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:17.907 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.908 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.166 [2024-07-12 15:59:47.647461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.166 [2024-07-12 15:59:47.648206] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:18.166 [2024-07-12 15:59:47.648265] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.166 [2024-07-12 15:59:47.776092] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:18.166 15:59:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:18.166 [2024-07-12 15:59:47.834996] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:18.166 [2024-07-12 15:59:47.835017] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:18.166 [2024-07-12 15:59:47.835026] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:19.099 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:19.358 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.359 [2024-07-12 15:59:48.883979] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:19.359 [2024-07-12 15:59:48.884037] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:19.359 [2024-07-12 15:59:48.891817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.359 [2024-07-12 15:59:48.891853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.359 [2024-07-12 15:59:48.891884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.359 [2024-07-12 15:59:48.891900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.359 [2024-07-12 15:59:48.891914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:19.359 [2024-07-12 15:59:48.891928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.359 [2024-07-12 15:59:48.891944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.359 [2024-07-12 15:59:48.891958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:19.359 [2024-07-12 15:59:48.891973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbfe20 is same with the state(5) to be set 00:23:19.359 [2024-07-12 15:59:48.901811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbfe20 (9): Bad file descriptor 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.359 [2024-07-12 15:59:48.911854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.359 [2024-07-12 15:59:48.912163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.359 [2024-07-12 15:59:48.912198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbfe20 with addr=10.0.0.2, port=4420 00:23:19.359 [2024-07-12 15:59:48.912216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbfe20 is same with the state(5) to be set 00:23:19.359 [2024-07-12 15:59:48.912240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbfe20 (9): Bad file descriptor 00:23:19.359 [2024-07-12 15:59:48.912275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.359 [2024-07-12 15:59:48.912293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.359 [2024-07-12 15:59:48.912309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.359 [2024-07-12 15:59:48.912342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.359 [2024-07-12 15:59:48.921949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.359 [2024-07-12 15:59:48.922177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.359 [2024-07-12 15:59:48.922207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbfe20 with addr=10.0.0.2, port=4420 00:23:19.359 [2024-07-12 15:59:48.922224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbfe20 is same with the state(5) to be set 00:23:19.359 [2024-07-12 15:59:48.922246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbfe20 (9): Bad file descriptor 00:23:19.359 [2024-07-12 15:59:48.922267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.359 [2024-07-12 15:59:48.922281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.359 [2024-07-12 15:59:48.922295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
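The connect() failures just above (errno = 111, i.e. ECONNREFUSED) are the expected fallout of the nvmf_subsystem_remove_listener call at host/discovery.sh@127: the 10.0.0.2:4420 listener is gone, so each reconnect attempt to that port is refused until the discovery poller prunes the stale 4420 path and only 4421 remains. The test observes this through the same RPC/jq pipeline that keeps appearing at host/discovery.sh@63; a rough reconstruction of that helper, inferred from the xtrace output rather than copied from the script, looks like this:

get_subsystem_paths() {
    # Print the listening port (trsvcid) of every path attached to the
    # named controller, e.g. "4420 4421" while both listeners exist and
    # just "4421" after the 4420 listener has been removed.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

The check at host/discovery.sh@131 then simply polls it: waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'.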
00:23:19.359 [2024-07-12 15:59:48.922353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.359 [2024-07-12 15:59:48.932019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.359 [2024-07-12 15:59:48.932253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.359 [2024-07-12 15:59:48.932282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbfe20 with addr=10.0.0.2, port=4420 00:23:19.359 [2024-07-12 15:59:48.932299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbfe20 is same with the state(5) to be set 00:23:19.359 [2024-07-12 15:59:48.932331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbfe20 (9): Bad file descriptor 00:23:19.359 [2024-07-12 15:59:48.932367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.359 [2024-07-12 15:59:48.932385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.359 [2024-07-12 15:59:48.932400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.359 [2024-07-12 15:59:48.932420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.359 [2024-07-12 15:59:48.942091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.359 [2024-07-12 15:59:48.942356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.359 [2024-07-12 15:59:48.942390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbfe20 with addr=10.0.0.2, port=4420 00:23:19.359 [2024-07-12 15:59:48.942407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfbfe20 is same with the state(5) to be set 00:23:19.359 [2024-07-12 15:59:48.942431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbfe20 (9): Bad file descriptor 00:23:19.359 [2024-07-12 15:59:48.942452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.359 [2024-07-12 15:59:48.942466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.359 [2024-07-12 15:59:48.942481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.359 [2024-07-12 15:59:48.942501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.359 [2024-07-12 15:59:48.952163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.359 [2024-07-12 15:59:48.952356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.359 [2024-07-12 15:59:48.952385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbfe20 with addr=10.0.0.2, port=4420 00:23:19.359 [2024-07-12 15:59:48.952403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbfe20 is same with the state(5) to be set 00:23:19.359 [2024-07-12 15:59:48.952425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbfe20 (9): Bad file descriptor 00:23:19.359 [2024-07-12 15:59:48.952446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.359 [2024-07-12 15:59:48.952460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.359 [2024-07-12 15:59:48.952475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:19.359 [2024-07-12 15:59:48.952494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.359 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.359 [2024-07-12 15:59:48.962233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:19.359 [2024-07-12 15:59:48.962499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.359 [2024-07-12 15:59:48.962528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbfe20 with addr=10.0.0.2, port=4420 00:23:19.359 [2024-07-12 15:59:48.962545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbfe20 is same with the state(5) to be set 00:23:19.359 [2024-07-12 15:59:48.962568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbfe20 (9): Bad file descriptor 00:23:19.359 [2024-07-12 15:59:48.962589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:19.359 [2024-07-12 15:59:48.962620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:19.359 [2024-07-12 15:59:48.962634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
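Every one of these [[ ... == ... ]] assertions is wrapped in the waitforcondition helper, and its behaviour can be read straight off the autotest_common.sh@912-@918 xtrace lines: the condition string is re-evaluated up to ten times with a one-second sleep between attempts, and the helper only gives up once the retry budget is exhausted. A minimal sketch consistent with that trace (the real helper may differ in details such as its failure message):

waitforcondition() {
    local cond=$1    # @912: condition passed as a single quoted string
    local max=10     # @913: retry budget
    while ((max--)); do            # @914
        if eval "$cond"; then      # @915
            return 0               # @916: condition holds, stop polling
        fi
        sleep 1                    # @918: wait before the next attempt
    done
    return 1    # condition never became true within ~10 seconds
}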
00:23:19.359 [2024-07-12 15:59:48.962683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:19.359 [2024-07-12 15:59:48.970661] bdev_nvme.c:6775:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:19.360 [2024-07-12 15:59:48.970693] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:19.360 15:59:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:19.360 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' 
'"$(get_bdev_list)"' == '""' ']]' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.618 15:59:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.551 [2024-07-12 15:59:50.248136] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:20.551 [2024-07-12 15:59:50.248181] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:20.551 [2024-07-12 15:59:50.248204] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:20.808 [2024-07-12 15:59:50.335562] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:20.808 [2024-07-12 15:59:50.402634] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:20.808 [2024-07-12 15:59:50.402678] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.808 request: 00:23:20.808 { 00:23:20.808 "name": "nvme", 00:23:20.808 "trtype": 
"tcp", 00:23:20.808 "traddr": "10.0.0.2", 00:23:20.808 "adrfam": "ipv4", 00:23:20.808 "trsvcid": "8009", 00:23:20.808 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:20.808 "wait_for_attach": true, 00:23:20.808 "method": "bdev_nvme_start_discovery", 00:23:20.808 "req_id": 1 00:23:20.808 } 00:23:20.808 Got JSON-RPC error response 00:23:20.808 response: 00:23:20.808 { 00:23:20.808 "code": -17, 00:23:20.808 "message": "File exists" 00:23:20.808 } 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:20.808 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 request: 00:23:20.809 { 00:23:20.809 "name": "nvme_second", 00:23:20.809 "trtype": "tcp", 00:23:20.809 "traddr": "10.0.0.2", 00:23:20.809 "adrfam": "ipv4", 00:23:20.809 "trsvcid": "8009", 00:23:20.809 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:20.809 "wait_for_attach": true, 00:23:20.809 "method": "bdev_nvme_start_discovery", 00:23:20.809 "req_id": 1 00:23:20.809 } 00:23:20.809 Got JSON-RPC error response 00:23:20.809 response: 00:23:20.809 { 00:23:20.809 "code": -17, 00:23:20.809 "message": "File exists" 00:23:20.809 } 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:20.809 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:21.066 15:59:50 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:21.066 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:21.067 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.067 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:21.067 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:21.067 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:21.067 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.067 15:59:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:21.999 [2024-07-12 15:59:51.598940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.999 [2024-07-12 15:59:51.598984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc0ba0 with addr=10.0.0.2, port=8010 00:23:21.999 [2024-07-12 15:59:51.599010] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:21.999 [2024-07-12 15:59:51.599023] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:21.999 [2024-07-12 15:59:51.599035] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:22.960 [2024-07-12 15:59:52.601424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.960 [2024-07-12 15:59:52.601488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc0ba0 with addr=10.0.0.2, port=8010 00:23:22.960 [2024-07-12 15:59:52.601520] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:22.960 [2024-07-12 15:59:52.601534] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:22.960 [2024-07-12 15:59:52.601548] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:23.890 [2024-07-12 15:59:53.603584] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:23.890 request: 00:23:23.890 { 00:23:23.890 "name": "nvme_second", 00:23:23.890 "trtype": "tcp", 00:23:23.890 "traddr": "10.0.0.2", 00:23:23.890 "adrfam": "ipv4", 00:23:23.890 "trsvcid": "8010", 00:23:23.890 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:23.890 "wait_for_attach": false, 00:23:23.890 "attach_timeout_ms": 3000, 00:23:23.890 "method": "bdev_nvme_start_discovery", 00:23:23.890 "req_id": 1 00:23:23.890 } 00:23:23.890 Got JSON-RPC error response 00:23:23.890 response: 00:23:23.890 { 00:23:23.890 "code": -110, 00:23:23.890 "message": "Connection timed out" 00:23:23.890 } 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 1 == 0 ]] 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:23.890 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.147 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:24.147 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:24.147 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 97371 00:23:24.147 15:59:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:24.147 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.147 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:24.147 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.148 rmmod nvme_tcp 00:23:24.148 rmmod nvme_fabrics 00:23:24.148 rmmod nvme_keyring 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 97294 ']' 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 97294 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 97294 ']' 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 97294 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97294 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 97294' 00:23:24.148 killing process with pid 97294 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 97294 00:23:24.148 15:59:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 97294 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.406 15:59:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.939 00:23:26.939 real 0m13.167s 00:23:26.939 user 0m19.008s 00:23:26.939 sys 0m2.778s 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.939 ************************************ 00:23:26.939 END TEST nvmf_host_discovery 00:23:26.939 ************************************ 00:23:26.939 15:59:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:26.939 15:59:56 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:26.939 15:59:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:26.939 15:59:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:26.939 15:59:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.939 ************************************ 00:23:26.939 START TEST nvmf_host_multipath_status 00:23:26.939 ************************************ 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:26.939 * Looking for test storage... 
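Note on the discovery error paths asserted in the nvmf_host_discovery run that just finished: the test deliberately re-issues bdev_nvme_start_discovery for a controller name that is already attached (JSON-RPC error -17, "File exists") and then retries against port 8010, where nothing answers, with a 3000 ms attach timeout to force -110 ("Connection timed out"). A minimal stand-alone reproduction, assuming rpc_cmd simply forwards to scripts/rpc.py against the host-side socket /tmp/host.sock, would look like:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Re-using an existing controller name: expected to fail with "File exists"
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -w
  # Discovery against a port with no listener, 3 s attach timeout: expected "Connection timed out"
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
      -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

Both calls are expected to return non-zero; the surrounding NOT helper inverts the exit status, so the test only fails if an RPC unexpectedly succeeds.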
00:23:26.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.939 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:26.940 15:59:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.940 15:59:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:28.840 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:28.840 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
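For reference, the E810 detection above matches Intel device IDs (0x1592/0x159b) from the PCI bus cache and then maps each PCI function to its kernel net device through sysfs. A hand-run sketch of the same mapping, using the 0000:09:00.0 address printed in this log, would be:

  pci=0000:09:00.0                                  # first E810 port found above
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      echo "net device under $pci: ${dev##*/}"      # resolves to cvl_0_0 on this rig
  done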
00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:28.840 Found net devices under 0000:09:00.0: cvl_0_0 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:28.840 Found net devices under 0000:09:00.1: cvl_0_1 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.840 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:28.840 15:59:58 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:28.841 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.841 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:23:28.841 00:23:28.841 --- 10.0.0.2 ping statistics --- 00:23:28.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.841 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.841 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.841 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:23:28.841 00:23:28.841 --- 10.0.0.1 ping statistics --- 00:23:28.841 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.841 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=100463 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 100463 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 100463 ']' 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.841 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:28.841 [2024-07-12 15:59:58.387031] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
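Summary of the fabric bring-up just logged: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side, the peer port (cvl_0_1) stays in the root namespace as the initiator, both get 10.0.0.x/24 addresses, connectivity is verified with ping in both directions, and nvmf_tgt is then launched inside the namespace with core mask 0x3. A condensed sketch of those steps, with the interface names and paths taken from this log:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &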
00:23:28.841 [2024-07-12 15:59:58.387108] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.841 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.841 [2024-07-12 15:59:58.449667] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:28.841 [2024-07-12 15:59:58.559497] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.841 [2024-07-12 15:59:58.559551] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.841 [2024-07-12 15:59:58.559565] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.841 [2024-07-12 15:59:58.559576] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.841 [2024-07-12 15:59:58.559585] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.841 [2024-07-12 15:59:58.559638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.841 [2024-07-12 15:59:58.559643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=100463 00:23:29.098 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:29.354 [2024-07-12 15:59:58.921698] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.354 15:59:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:29.620 Malloc0 00:23:29.620 15:59:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:29.875 15:59:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.131 15:59:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.388 [2024-07-12 15:59:59.976647] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.388 15:59:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:30.644 [2024-07-12 16:00:00.229453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.644 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=100663 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 100663 /var/tmp/bdevperf.sock 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 100663 ']' 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:30.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:30.645 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:30.901 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.901 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:30.901 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:31.158 16:00:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:31.721 Nvme0n1 00:23:31.721 16:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:31.977 Nvme0n1 00:23:31.977 16:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:32.233 16:00:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:34.125 16:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:34.125 16:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:34.382 16:00:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:34.639 16:00:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:35.571 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:35.571 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.571 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.571 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.829 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.829 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:35.829 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.829 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.087 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.087 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.087 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.087 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.344 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.344 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.344 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.344 16:00:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:36.601 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.601 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:36.601 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.601 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.859 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.859 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:36.859 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.859 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.116 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.116 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:37.116 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:37.375 16:00:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:37.641 16:00:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:38.576 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:38.576 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:38.576 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.576 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:38.839 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:38.839 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:38.839 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.839 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.103 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.103 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.104 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.104 16:00:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.374 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.374 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.375 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.375 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.650 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.650 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.650 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.650 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:39.918 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.918 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:39.918 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.918 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.184 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.184 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:40.184 16:00:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.447 16:00:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:40.707 16:00:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:41.661 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:41.661 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:41.661 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.661 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.929 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.930 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:41.930 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.930 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:42.199 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:42.199 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:42.199 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.199 16:00:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.462 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.462 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.462 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.462 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.737 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.738 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.738 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.738 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:43.002 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.002 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:43.002 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.002 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:43.268 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:43.268 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:43.268 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:43.595 16:00:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:43.595 16:00:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:44.552 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:44.552 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.552 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.552 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.817 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.817 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.817 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.817 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:45.078 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.079 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:45.079 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.079 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:45.335 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.335 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:45.335 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.335 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:45.590 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.590 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.590 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.590 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.846 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:23:45.846 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:45.846 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.846 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:46.103 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.103 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:46.103 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:46.361 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:46.618 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:47.551 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:47.551 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.551 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.551 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.808 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.808 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:47.808 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.808 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.065 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.065 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:48.065 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.065 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.323 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.323 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:23:48.323 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.323 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.579 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.579 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:48.579 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.580 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.836 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.837 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:48.837 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.837 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:49.093 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.093 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:49.093 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:49.350 16:00:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.607 16:00:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:50.538 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:50.538 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:50.538 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.538 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.796 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.796 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:50.796 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.796 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:51.054 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.054 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.054 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.054 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:51.333 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.333 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.333 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.333 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.590 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.590 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:51.590 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.590 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.848 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:51.848 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.848 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.848 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.105 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.105 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:52.362 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:52.362 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:23:52.619 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:52.876 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:53.808 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:53.808 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:53.808 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.808 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.065 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.065 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:54.065 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.065 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.322 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.322 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.322 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.322 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.579 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.579 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.579 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.579 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.836 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.836 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:54.836 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.836 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.093 16:00:24 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.093 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.093 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.093 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.350 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.350 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:55.350 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:55.607 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:55.864 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:56.795 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:56.795 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:56.795 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.795 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.053 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:57.053 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:57.053 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.053 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.310 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.310 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.310 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.310 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.568 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.568 16:00:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.568 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.568 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.825 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.825 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.825 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.825 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.082 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.082 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:58.082 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.082 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:58.340 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.340 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:58.340 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.597 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:58.854 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:59.786 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:59.786 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.786 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.786 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.043 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.043 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:00.043 16:00:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.043 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.301 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.301 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.301 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.301 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.866 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.124 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.124 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.124 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.124 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.381 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.381 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:01.381 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:01.639 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:01.896 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:02.829 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:02.829 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:02.829 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:02.829 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.087 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.087 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:03.087 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.087 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.344 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:03.344 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.344 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.344 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.601 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.601 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.601 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.601 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.860 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.860 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:03.860 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.860 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:04.123 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:04.123 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:04.123 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:04.123 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 100663 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 100663 ']' 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 100663 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100663 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100663' 00:24:04.380 killing process with pid 100663 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 100663 00:24:04.380 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 100663 00:24:04.670 Connection closed with partial response: 00:24:04.670 00:24:04.670 00:24:04.670 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 100663 00:24:04.670 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:04.670 [2024-07-12 16:00:00.290368] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:24:04.670 [2024-07-12 16:00:00.290456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100663 ] 00:24:04.670 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.670 [2024-07-12 16:00:00.350364] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.670 [2024-07-12 16:00:00.463904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.670 Running I/O for 90 seconds... 
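Note on the trace above (editorial sketch, not part of the captured log): each check_status cycle in multipath_status.sh reduces to flipping the ANA state of the two listeners with nvmf_subsystem_listener_set_ana_state, sleeping one second, and then polling bdevperf's view of every I/O path with bdev_nvme_get_io_paths plus a jq filter on trsvcid. A minimal standalone sketch of that loop follows. The rpc.py path, subsystem NQN, listener address/ports, bdevperf RPC socket, and the jq filter are taken verbatim from the trace; the helper names set_ana and path_flag are made up for illustration (the script's own helpers are set_ANA_state and port_status), and a single I/O path per port is assumed, as in this run.

#!/usr/bin/env bash
# Sketch only: mirrors the set-ANA-state / poll-io-paths pattern seen in the trace.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

set_ana() {          # $1 = ANA state for port 4420, $2 = ANA state for port 4421
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

path_flag() {        # $1 = trsvcid (4420 or 4421), $2 = current|connected|accessible
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2"
}

# Example: make 4420 inaccessible and 4421 optimized, then confirm I/O moved to 4421.
set_ana inaccessible optimized
sleep 1
[[ $(path_flag 4420 current) == false ]]
[[ $(path_flag 4421 current) == true ]]
[[ $(path_flag 4420 accessible) == false ]]
[[ $(path_flag 4421 accessible) == true ]]

With the active_active multipath policy set earlier via bdev_nvme_set_multipath_policy, both ports can report current=true at once, which is what the check_status true true true true true true call around 16:00:23 verifies; the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in the bdevperf log below are the expected path-related status while a listener is held in the inaccessible state.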
00:24:04.670 [2024-07-12 16:00:15.965489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.965627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.965674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.965712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.965749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.965786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.965823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.965860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.965876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.966091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.966138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.966188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.966229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.670 [2024-07-12 16:00:15.966927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:04.670 [2024-07-12 16:00:15.966965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.966987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.967003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:04.670 [2024-07-12 16:00:15.967025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.670 [2024-07-12 16:00:15.967041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.967563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.967975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.967998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.671 [2024-07-12 16:00:15.968544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:24:04.671 [2024-07-12 16:00:15.968677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.968963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.968989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.969006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.969032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.969048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.969074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.969090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.969116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.969132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.969159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.969175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.969201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.969221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:04.671 [2024-07-12 16:00:15.969249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.671 [2024-07-12 16:00:15.969266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.969961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.969977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:04.672 [2024-07-12 16:00:15.970065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 
nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:36728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.970957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.970986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.971002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.971029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.971045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.971072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.971088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.971117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.971133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.971160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.971176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:04.672 [2024-07-12 16:00:15.971204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.672 [2024-07-12 16:00:15.971220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:15.971265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:15.971309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:15.971363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:24:04.673 [2024-07-12 16:00:15.971390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:15.971407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:15.971451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:15.971500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:15.971544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:15.971589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:15.971633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:15.971677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:15.971721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:15.971765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:15.971793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:15.971810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.530974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:04.673 [2024-07-12 16:00:31.531924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.531961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.531983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:31.531998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.532020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:31.532036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.532058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:31.532074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.532097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.673 [2024-07-12 16:00:31.532113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.532757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.532783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.532811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.673 [2024-07-12 16:00:31.532829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:04.673 [2024-07-12 16:00:31.532851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.532868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.532890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.532906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.532928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 
nsid:1 lba:16544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.532950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.532973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.532989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533346] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.674 [2024-07-12 16:00:31.533521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.674 [2024-07-12 16:00:31.533558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.533971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.533993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.534009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.534031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.534047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.534068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.534084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.534106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.534122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:04.674 [2024-07-12 16:00:31.534143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.674 [2024-07-12 16:00:31.534159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.534180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.675 [2024-07-12 16:00:31.534196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.534218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.675 [2024-07-12 16:00:31.534250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.534274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.675 [2024-07-12 16:00:31.534291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.534313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:04.675 [2024-07-12 16:00:31.534341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.534963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.675 [2024-07-12 16:00:31.534988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.535015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.675 [2024-07-12 16:00:31.535033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.535061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.675 [2024-07-12 16:00:31.535079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.535101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.675 [2024-07-12 16:00:31.535117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:04.675 [2024-07-12 16:00:31.535139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:04.675 [2024-07-12 16:00:31.535171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.675 [2024-07-12 16:00:31.535209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.675 [2024-07-12 16:00:31.535245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.675 [2024-07-12 16:00:31.535282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.675 [2024-07-12 16:00:31.535346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:04.675 [2024-07-12 16:00:31.535385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:04.675 [2024-07-12 16:00:31.535423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:04.675 [2024-07-12 16:00:31.535461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:04.675 [2024-07-12 16:00:31.535483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:04.675 [2024-07-12 16:00:31.535500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:04.675 Received shutdown signal, test time was about 32.207380 seconds
00:24:04.675
00:24:04.675 Latency(us)
00:24:04.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:04.675 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:04.675 Verification LBA range: start 0x0 length 0x4000
00:24:04.675 Nvme0n1 : 32.21 7852.94 30.68 0.00 0.00 16272.74 512.76 4026531.84
=================================================================================================================== 00:24:04.675 Total : 7852.94 30.68 0.00 0.00 16272.74 512.76 4026531.84 00:24:04.675 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.933 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:04.933 rmmod nvme_tcp 00:24:04.933 rmmod nvme_fabrics 00:24:05.190 rmmod nvme_keyring 00:24:05.190 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.190 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 100463 ']' 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 100463 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 100463 ']' 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 100463 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100463 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100463' 00:24:05.191 killing process with pid 100463 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 100463 00:24:05.191 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 100463 00:24:05.449 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.449 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.449 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.449 16:00:35 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.449 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.449 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.449 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.449 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.354 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.354 00:24:07.354 real 0m40.949s 00:24:07.354 user 2m1.898s 00:24:07.354 sys 0m11.011s 00:24:07.354 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.354 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.354 ************************************ 00:24:07.354 END TEST nvmf_host_multipath_status 00:24:07.354 ************************************ 00:24:07.354 16:00:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:07.354 16:00:37 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:07.354 16:00:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:07.354 16:00:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.612 16:00:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:07.612 ************************************ 00:24:07.612 START TEST nvmf_discovery_remove_ifc 00:24:07.612 ************************************ 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:07.612 * Looking for test storage... 
00:24:07.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.612 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.613 16:00:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:09.514 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:09.514 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.514 16:00:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:09.514 Found net devices under 0000:09:00.0: cvl_0_0 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.514 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.773 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.773 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.773 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.773 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.773 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:09.774 Found net devices under 0000:09:00.1: cvl_0_1 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:24:09.774 00:24:09.774 --- 10.0.0.2 ping statistics --- 00:24:09.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.774 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:09.774 00:24:09.774 --- 10.0.0.1 ping statistics --- 00:24:09.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.774 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=107455 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 107455 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 107455 ']' 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.774 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.774 [2024-07-12 16:00:39.452546] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:24:09.774 [2024-07-12 16:00:39.452639] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.774 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.032 [2024-07-12 16:00:39.512758] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.032 [2024-07-12 16:00:39.616057] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.032 [2024-07-12 16:00:39.616108] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.032 [2024-07-12 16:00:39.616136] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.032 [2024-07-12 16:00:39.616146] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.032 [2024-07-12 16:00:39.616156] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.032 [2024-07-12 16:00:39.616194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.032 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.032 [2024-07-12 16:00:39.755001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.289 [2024-07-12 16:00:39.763148] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:10.289 null0 00:24:10.289 [2024-07-12 16:00:39.795126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.289 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=107596 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 107596 /tmp/host.sock 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 107596 ']' 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:10.290 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.290 16:00:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.290 [2024-07-12 16:00:39.856541] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:24:10.290 [2024-07-12 16:00:39.856619] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107596 ] 00:24:10.290 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.290 [2024-07-12 16:00:39.912399] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.547 [2024-07-12 16:00:40.019101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.547 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.548 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:10.548 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.548 16:00:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.919 [2024-07-12 16:00:41.223323] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:11.919 [2024-07-12 16:00:41.223346] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:11.919 [2024-07-12 16:00:41.223368] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.919 [2024-07-12 16:00:41.350826] 
bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:11.919 [2024-07-12 16:00:41.454379] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:11.919 [2024-07-12 16:00:41.454434] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:11.919 [2024-07-12 16:00:41.454471] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:11.919 [2024-07-12 16:00:41.454493] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:11.919 [2024-07-12 16:00:41.454516] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.919 [2024-07-12 16:00:41.461641] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23539d0 was disconnected and freed. delete nvme_qpair. 
00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:11.919 16:00:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.310 16:00:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.248 16:00:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:15.180 16:00:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:16.112 16:00:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.043 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.043 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.043 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.043 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.043 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.043 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.043 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.300 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:17.300 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:17.300 16:00:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.300 [2024-07-12 16:00:46.896053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:17.300 [2024-07-12 16:00:46.896130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.300 [2024-07-12 16:00:46.896150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.300 [2024-07-12 16:00:46.896167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.300 [2024-07-12 16:00:46.896181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.300 [2024-07-12 16:00:46.896201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.300 [2024-07-12 16:00:46.896214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.300 [2024-07-12 16:00:46.896227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.300 [2024-07-12 16:00:46.896240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.300 [2024-07-12 16:00:46.896253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.300 [2024-07-12 16:00:46.896265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.300 [2024-07-12 16:00:46.896278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a4e0 is same with the state(5) to be set 00:24:17.300 [2024-07-12 16:00:46.906072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231a4e0 (9): Bad file descriptor 00:24:17.300 [2024-07-12 16:00:46.916118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:18.226 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:18.226 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:18.226 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:18.226 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.226 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:18.226 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:18.226 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:18.484 [2024-07-12 16:00:47.963349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:18.484 [2024-07-12 
16:00:47.963421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x231a4e0 with addr=10.0.0.2, port=4420 00:24:18.484 [2024-07-12 16:00:47.963445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x231a4e0 is same with the state(5) to be set 00:24:18.484 [2024-07-12 16:00:47.963486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x231a4e0 (9): Bad file descriptor 00:24:18.484 [2024-07-12 16:00:47.963927] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:18.484 [2024-07-12 16:00:47.963956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:18.484 [2024-07-12 16:00:47.963971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:18.484 [2024-07-12 16:00:47.963985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:18.484 [2024-07-12 16:00:47.964016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:18.484 [2024-07-12 16:00:47.964032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:18.484 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.484 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:18.484 16:00:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:19.414 [2024-07-12 16:00:48.966551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:19.414 [2024-07-12 16:00:48.966617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:19.414 [2024-07-12 16:00:48.966632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:19.414 [2024-07-12 16:00:48.966654] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:19.414 [2024-07-12 16:00:48.966685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.414 [2024-07-12 16:00:48.966726] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:19.414 [2024-07-12 16:00:48.966783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.414 [2024-07-12 16:00:48.966803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.414 [2024-07-12 16:00:48.966820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.414 [2024-07-12 16:00:48.966833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.414 [2024-07-12 16:00:48.966846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.414 [2024-07-12 16:00:48.966859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.414 [2024-07-12 16:00:48.966873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.414 [2024-07-12 16:00:48.966885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.414 [2024-07-12 16:00:48.966900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.414 [2024-07-12 16:00:48.966912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.414 [2024-07-12 16:00:48.966925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:19.414 [2024-07-12 16:00:48.967012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2319960 (9): Bad file descriptor 00:24:19.414 [2024-07-12 16:00:48.968006] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:19.414 [2024-07-12 16:00:48.968026] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.414 16:00:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:19.414 16:00:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:20.822 16:00:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.388 [2024-07-12 16:00:51.019164] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.388 [2024-07-12 16:00:51.019197] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.388 [2024-07-12 16:00:51.019220] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.645 [2024-07-12 16:00:51.146641] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:21.645 16:00:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.645 [2024-07-12 16:00:51.329939] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:21.645 [2024-07-12 16:00:51.329983] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:21.645 [2024-07-12 16:00:51.330015] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:21.645 [2024-07-12 16:00:51.330038] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:21.645 [2024-07-12 16:00:51.330052] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:21.645 [2024-07-12 16:00:51.337639] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x235d2a0 was disconnected and freed. delete nvme_qpair. 
00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 107596 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 107596 ']' 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 107596 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107596 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107596' 00:24:22.577 killing process with pid 107596 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 107596 00:24:22.577 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 107596 00:24:22.834 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:22.834 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.834 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:22.834 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.834 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:22.834 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.834 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.834 rmmod nvme_tcp 00:24:22.834 rmmod nvme_fabrics 00:24:23.091 rmmod nvme_keyring 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:23.091 
16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 107455 ']' 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 107455 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 107455 ']' 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 107455 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 107455 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 107455' 00:24:23.091 killing process with pid 107455 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 107455 00:24:23.091 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 107455 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.349 16:00:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.256 16:00:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.256 00:24:25.256 real 0m17.833s 00:24:25.256 user 0m25.702s 00:24:25.256 sys 0m3.122s 00:24:25.256 16:00:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.256 16:00:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.256 ************************************ 00:24:25.256 END TEST nvmf_discovery_remove_ifc 00:24:25.256 ************************************ 00:24:25.256 16:00:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:25.256 16:00:54 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:25.256 16:00:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:25.256 16:00:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.256 16:00:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:25.514 ************************************ 00:24:25.514 START TEST nvmf_identify_kernel_target 00:24:25.514 ************************************ 00:24:25.514 16:00:54 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:25.514 * Looking for test storage... 00:24:25.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.514 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:25.515 16:00:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.515 16:00:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:27.415 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:27.415 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:27.415 Found net devices under 0000:09:00.0: cvl_0_0 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:27.415 Found net devices under 0000:09:00.1: cvl_0_1 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.415 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.416 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:24:27.676 00:24:27.676 --- 10.0.0.2 ping statistics --- 00:24:27.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.676 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:24:27.676 00:24:27.676 --- 10.0.0.1 ping statistics --- 00:24:27.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.676 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:27.676 16:00:57 
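[editor's note, not emitted by the harness] Condensed sketch of the nvmf_tcp_init steps traced above: the target-side port is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, TCP port 4420 is opened in the firewall, and reachability is verified with ping in both directions. Interface names are the ones detected in this run (cvl_0_0 on the target side, cvl_0_1 on the initiator side).
# sketch only -- condensed from the commands traced above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator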
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:27.676 16:00:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:29.051 Waiting for block devices as requested 00:24:29.051 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:29.051 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:29.051 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:29.309 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:29.309 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:29.309 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:29.309 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:29.566 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:29.566 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:29.866 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:29.866 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:29.866 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:29.866 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:30.126 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:30.126 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:30.126 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:30.126 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:30.384 No valid GPT data, bailing 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:30.384 16:00:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:30.384 00:24:30.384 Discovery Log Number of Records 2, Generation counter 2 00:24:30.384 =====Discovery Log Entry 0====== 00:24:30.384 trtype: tcp 00:24:30.384 adrfam: ipv4 00:24:30.384 subtype: current discovery subsystem 00:24:30.384 treq: not specified, sq flow control disable supported 00:24:30.384 portid: 1 00:24:30.384 trsvcid: 4420 00:24:30.384 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:30.384 traddr: 10.0.0.1 00:24:30.384 eflags: none 00:24:30.384 sectype: none 00:24:30.384 =====Discovery Log Entry 1====== 00:24:30.384 trtype: tcp 00:24:30.384 adrfam: ipv4 00:24:30.384 subtype: nvme subsystem 00:24:30.384 treq: not specified, sq flow control disable supported 00:24:30.384 portid: 1 00:24:30.384 trsvcid: 4420 00:24:30.384 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:30.384 traddr: 10.0.0.1 00:24:30.384 eflags: none 00:24:30.384 sectype: none 00:24:30.384 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:30.384 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:30.644 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.644 ===================================================== 00:24:30.644 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:30.644 ===================================================== 00:24:30.644 Controller Capabilities/Features 00:24:30.644 ================================ 00:24:30.644 Vendor ID: 0000 00:24:30.644 Subsystem Vendor ID: 0000 00:24:30.644 Serial Number: f2729e5bb3c70373b772 00:24:30.644 Model Number: Linux 00:24:30.644 Firmware Version: 6.7.0-68 00:24:30.644 Recommended Arb Burst: 0 00:24:30.644 IEEE OUI Identifier: 00 00 00 00:24:30.644 Multi-path I/O 00:24:30.644 May have multiple subsystem ports: No 00:24:30.644 May have multiple 
controllers: No 00:24:30.644 Associated with SR-IOV VF: No 00:24:30.644 Max Data Transfer Size: Unlimited 00:24:30.644 Max Number of Namespaces: 0 00:24:30.644 Max Number of I/O Queues: 1024 00:24:30.644 NVMe Specification Version (VS): 1.3 00:24:30.644 NVMe Specification Version (Identify): 1.3 00:24:30.644 Maximum Queue Entries: 1024 00:24:30.644 Contiguous Queues Required: No 00:24:30.644 Arbitration Mechanisms Supported 00:24:30.644 Weighted Round Robin: Not Supported 00:24:30.644 Vendor Specific: Not Supported 00:24:30.644 Reset Timeout: 7500 ms 00:24:30.644 Doorbell Stride: 4 bytes 00:24:30.644 NVM Subsystem Reset: Not Supported 00:24:30.644 Command Sets Supported 00:24:30.644 NVM Command Set: Supported 00:24:30.644 Boot Partition: Not Supported 00:24:30.644 Memory Page Size Minimum: 4096 bytes 00:24:30.644 Memory Page Size Maximum: 4096 bytes 00:24:30.644 Persistent Memory Region: Not Supported 00:24:30.644 Optional Asynchronous Events Supported 00:24:30.644 Namespace Attribute Notices: Not Supported 00:24:30.644 Firmware Activation Notices: Not Supported 00:24:30.644 ANA Change Notices: Not Supported 00:24:30.644 PLE Aggregate Log Change Notices: Not Supported 00:24:30.644 LBA Status Info Alert Notices: Not Supported 00:24:30.644 EGE Aggregate Log Change Notices: Not Supported 00:24:30.644 Normal NVM Subsystem Shutdown event: Not Supported 00:24:30.644 Zone Descriptor Change Notices: Not Supported 00:24:30.644 Discovery Log Change Notices: Supported 00:24:30.644 Controller Attributes 00:24:30.644 128-bit Host Identifier: Not Supported 00:24:30.644 Non-Operational Permissive Mode: Not Supported 00:24:30.644 NVM Sets: Not Supported 00:24:30.644 Read Recovery Levels: Not Supported 00:24:30.644 Endurance Groups: Not Supported 00:24:30.644 Predictable Latency Mode: Not Supported 00:24:30.644 Traffic Based Keep ALive: Not Supported 00:24:30.644 Namespace Granularity: Not Supported 00:24:30.644 SQ Associations: Not Supported 00:24:30.644 UUID List: Not Supported 00:24:30.644 Multi-Domain Subsystem: Not Supported 00:24:30.644 Fixed Capacity Management: Not Supported 00:24:30.644 Variable Capacity Management: Not Supported 00:24:30.644 Delete Endurance Group: Not Supported 00:24:30.644 Delete NVM Set: Not Supported 00:24:30.644 Extended LBA Formats Supported: Not Supported 00:24:30.644 Flexible Data Placement Supported: Not Supported 00:24:30.644 00:24:30.644 Controller Memory Buffer Support 00:24:30.644 ================================ 00:24:30.644 Supported: No 00:24:30.644 00:24:30.644 Persistent Memory Region Support 00:24:30.644 ================================ 00:24:30.644 Supported: No 00:24:30.644 00:24:30.644 Admin Command Set Attributes 00:24:30.644 ============================ 00:24:30.644 Security Send/Receive: Not Supported 00:24:30.644 Format NVM: Not Supported 00:24:30.644 Firmware Activate/Download: Not Supported 00:24:30.644 Namespace Management: Not Supported 00:24:30.644 Device Self-Test: Not Supported 00:24:30.644 Directives: Not Supported 00:24:30.644 NVMe-MI: Not Supported 00:24:30.644 Virtualization Management: Not Supported 00:24:30.644 Doorbell Buffer Config: Not Supported 00:24:30.644 Get LBA Status Capability: Not Supported 00:24:30.644 Command & Feature Lockdown Capability: Not Supported 00:24:30.644 Abort Command Limit: 1 00:24:30.644 Async Event Request Limit: 1 00:24:30.644 Number of Firmware Slots: N/A 00:24:30.644 Firmware Slot 1 Read-Only: N/A 00:24:30.644 Firmware Activation Without Reset: N/A 00:24:30.644 Multiple Update Detection Support: N/A 
00:24:30.644 Firmware Update Granularity: No Information Provided 00:24:30.644 Per-Namespace SMART Log: No 00:24:30.644 Asymmetric Namespace Access Log Page: Not Supported 00:24:30.644 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:30.644 Command Effects Log Page: Not Supported 00:24:30.644 Get Log Page Extended Data: Supported 00:24:30.644 Telemetry Log Pages: Not Supported 00:24:30.644 Persistent Event Log Pages: Not Supported 00:24:30.644 Supported Log Pages Log Page: May Support 00:24:30.644 Commands Supported & Effects Log Page: Not Supported 00:24:30.644 Feature Identifiers & Effects Log Page:May Support 00:24:30.644 NVMe-MI Commands & Effects Log Page: May Support 00:24:30.644 Data Area 4 for Telemetry Log: Not Supported 00:24:30.644 Error Log Page Entries Supported: 1 00:24:30.644 Keep Alive: Not Supported 00:24:30.644 00:24:30.645 NVM Command Set Attributes 00:24:30.645 ========================== 00:24:30.645 Submission Queue Entry Size 00:24:30.645 Max: 1 00:24:30.645 Min: 1 00:24:30.645 Completion Queue Entry Size 00:24:30.645 Max: 1 00:24:30.645 Min: 1 00:24:30.645 Number of Namespaces: 0 00:24:30.645 Compare Command: Not Supported 00:24:30.645 Write Uncorrectable Command: Not Supported 00:24:30.645 Dataset Management Command: Not Supported 00:24:30.645 Write Zeroes Command: Not Supported 00:24:30.645 Set Features Save Field: Not Supported 00:24:30.645 Reservations: Not Supported 00:24:30.645 Timestamp: Not Supported 00:24:30.645 Copy: Not Supported 00:24:30.645 Volatile Write Cache: Not Present 00:24:30.645 Atomic Write Unit (Normal): 1 00:24:30.645 Atomic Write Unit (PFail): 1 00:24:30.645 Atomic Compare & Write Unit: 1 00:24:30.645 Fused Compare & Write: Not Supported 00:24:30.645 Scatter-Gather List 00:24:30.645 SGL Command Set: Supported 00:24:30.645 SGL Keyed: Not Supported 00:24:30.645 SGL Bit Bucket Descriptor: Not Supported 00:24:30.645 SGL Metadata Pointer: Not Supported 00:24:30.645 Oversized SGL: Not Supported 00:24:30.645 SGL Metadata Address: Not Supported 00:24:30.645 SGL Offset: Supported 00:24:30.645 Transport SGL Data Block: Not Supported 00:24:30.645 Replay Protected Memory Block: Not Supported 00:24:30.645 00:24:30.645 Firmware Slot Information 00:24:30.645 ========================= 00:24:30.645 Active slot: 0 00:24:30.645 00:24:30.645 00:24:30.645 Error Log 00:24:30.645 ========= 00:24:30.645 00:24:30.645 Active Namespaces 00:24:30.645 ================= 00:24:30.645 Discovery Log Page 00:24:30.645 ================== 00:24:30.645 Generation Counter: 2 00:24:30.645 Number of Records: 2 00:24:30.645 Record Format: 0 00:24:30.645 00:24:30.645 Discovery Log Entry 0 00:24:30.645 ---------------------- 00:24:30.645 Transport Type: 3 (TCP) 00:24:30.645 Address Family: 1 (IPv4) 00:24:30.645 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:30.645 Entry Flags: 00:24:30.645 Duplicate Returned Information: 0 00:24:30.645 Explicit Persistent Connection Support for Discovery: 0 00:24:30.645 Transport Requirements: 00:24:30.645 Secure Channel: Not Specified 00:24:30.645 Port ID: 1 (0x0001) 00:24:30.645 Controller ID: 65535 (0xffff) 00:24:30.645 Admin Max SQ Size: 32 00:24:30.645 Transport Service Identifier: 4420 00:24:30.645 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:30.645 Transport Address: 10.0.0.1 00:24:30.645 Discovery Log Entry 1 00:24:30.645 ---------------------- 00:24:30.645 Transport Type: 3 (TCP) 00:24:30.645 Address Family: 1 (IPv4) 00:24:30.645 Subsystem Type: 2 (NVM Subsystem) 00:24:30.645 Entry Flags: 
00:24:30.645 Duplicate Returned Information: 0 00:24:30.645 Explicit Persistent Connection Support for Discovery: 0 00:24:30.645 Transport Requirements: 00:24:30.645 Secure Channel: Not Specified 00:24:30.645 Port ID: 1 (0x0001) 00:24:30.645 Controller ID: 65535 (0xffff) 00:24:30.645 Admin Max SQ Size: 32 00:24:30.645 Transport Service Identifier: 4420 00:24:30.645 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:30.645 Transport Address: 10.0.0.1 00:24:30.645 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.645 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.645 get_feature(0x01) failed 00:24:30.645 get_feature(0x02) failed 00:24:30.645 get_feature(0x04) failed 00:24:30.645 ===================================================== 00:24:30.645 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:30.645 ===================================================== 00:24:30.645 Controller Capabilities/Features 00:24:30.645 ================================ 00:24:30.645 Vendor ID: 0000 00:24:30.645 Subsystem Vendor ID: 0000 00:24:30.645 Serial Number: 25ef094a972b199fa87c 00:24:30.645 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:30.645 Firmware Version: 6.7.0-68 00:24:30.645 Recommended Arb Burst: 6 00:24:30.645 IEEE OUI Identifier: 00 00 00 00:24:30.645 Multi-path I/O 00:24:30.645 May have multiple subsystem ports: Yes 00:24:30.645 May have multiple controllers: Yes 00:24:30.645 Associated with SR-IOV VF: No 00:24:30.645 Max Data Transfer Size: Unlimited 00:24:30.645 Max Number of Namespaces: 1024 00:24:30.645 Max Number of I/O Queues: 128 00:24:30.645 NVMe Specification Version (VS): 1.3 00:24:30.645 NVMe Specification Version (Identify): 1.3 00:24:30.645 Maximum Queue Entries: 1024 00:24:30.645 Contiguous Queues Required: No 00:24:30.645 Arbitration Mechanisms Supported 00:24:30.645 Weighted Round Robin: Not Supported 00:24:30.645 Vendor Specific: Not Supported 00:24:30.645 Reset Timeout: 7500 ms 00:24:30.645 Doorbell Stride: 4 bytes 00:24:30.645 NVM Subsystem Reset: Not Supported 00:24:30.645 Command Sets Supported 00:24:30.645 NVM Command Set: Supported 00:24:30.645 Boot Partition: Not Supported 00:24:30.645 Memory Page Size Minimum: 4096 bytes 00:24:30.645 Memory Page Size Maximum: 4096 bytes 00:24:30.645 Persistent Memory Region: Not Supported 00:24:30.645 Optional Asynchronous Events Supported 00:24:30.645 Namespace Attribute Notices: Supported 00:24:30.645 Firmware Activation Notices: Not Supported 00:24:30.645 ANA Change Notices: Supported 00:24:30.645 PLE Aggregate Log Change Notices: Not Supported 00:24:30.645 LBA Status Info Alert Notices: Not Supported 00:24:30.645 EGE Aggregate Log Change Notices: Not Supported 00:24:30.645 Normal NVM Subsystem Shutdown event: Not Supported 00:24:30.645 Zone Descriptor Change Notices: Not Supported 00:24:30.645 Discovery Log Change Notices: Not Supported 00:24:30.645 Controller Attributes 00:24:30.645 128-bit Host Identifier: Supported 00:24:30.645 Non-Operational Permissive Mode: Not Supported 00:24:30.645 NVM Sets: Not Supported 00:24:30.645 Read Recovery Levels: Not Supported 00:24:30.645 Endurance Groups: Not Supported 00:24:30.645 Predictable Latency Mode: Not Supported 00:24:30.645 Traffic Based Keep ALive: Supported 00:24:30.645 Namespace Granularity: Not Supported 
00:24:30.645 SQ Associations: Not Supported 00:24:30.645 UUID List: Not Supported 00:24:30.645 Multi-Domain Subsystem: Not Supported 00:24:30.645 Fixed Capacity Management: Not Supported 00:24:30.645 Variable Capacity Management: Not Supported 00:24:30.645 Delete Endurance Group: Not Supported 00:24:30.645 Delete NVM Set: Not Supported 00:24:30.645 Extended LBA Formats Supported: Not Supported 00:24:30.645 Flexible Data Placement Supported: Not Supported 00:24:30.645 00:24:30.645 Controller Memory Buffer Support 00:24:30.645 ================================ 00:24:30.645 Supported: No 00:24:30.645 00:24:30.645 Persistent Memory Region Support 00:24:30.645 ================================ 00:24:30.645 Supported: No 00:24:30.645 00:24:30.645 Admin Command Set Attributes 00:24:30.645 ============================ 00:24:30.645 Security Send/Receive: Not Supported 00:24:30.645 Format NVM: Not Supported 00:24:30.645 Firmware Activate/Download: Not Supported 00:24:30.645 Namespace Management: Not Supported 00:24:30.645 Device Self-Test: Not Supported 00:24:30.645 Directives: Not Supported 00:24:30.645 NVMe-MI: Not Supported 00:24:30.645 Virtualization Management: Not Supported 00:24:30.645 Doorbell Buffer Config: Not Supported 00:24:30.645 Get LBA Status Capability: Not Supported 00:24:30.645 Command & Feature Lockdown Capability: Not Supported 00:24:30.645 Abort Command Limit: 4 00:24:30.645 Async Event Request Limit: 4 00:24:30.645 Number of Firmware Slots: N/A 00:24:30.645 Firmware Slot 1 Read-Only: N/A 00:24:30.645 Firmware Activation Without Reset: N/A 00:24:30.645 Multiple Update Detection Support: N/A 00:24:30.645 Firmware Update Granularity: No Information Provided 00:24:30.645 Per-Namespace SMART Log: Yes 00:24:30.645 Asymmetric Namespace Access Log Page: Supported 00:24:30.645 ANA Transition Time : 10 sec 00:24:30.645 00:24:30.645 Asymmetric Namespace Access Capabilities 00:24:30.645 ANA Optimized State : Supported 00:24:30.645 ANA Non-Optimized State : Supported 00:24:30.645 ANA Inaccessible State : Supported 00:24:30.645 ANA Persistent Loss State : Supported 00:24:30.645 ANA Change State : Supported 00:24:30.645 ANAGRPID is not changed : No 00:24:30.645 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:30.645 00:24:30.645 ANA Group Identifier Maximum : 128 00:24:30.645 Number of ANA Group Identifiers : 128 00:24:30.645 Max Number of Allowed Namespaces : 1024 00:24:30.645 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:30.645 Command Effects Log Page: Supported 00:24:30.646 Get Log Page Extended Data: Supported 00:24:30.646 Telemetry Log Pages: Not Supported 00:24:30.646 Persistent Event Log Pages: Not Supported 00:24:30.646 Supported Log Pages Log Page: May Support 00:24:30.646 Commands Supported & Effects Log Page: Not Supported 00:24:30.646 Feature Identifiers & Effects Log Page:May Support 00:24:30.646 NVMe-MI Commands & Effects Log Page: May Support 00:24:30.646 Data Area 4 for Telemetry Log: Not Supported 00:24:30.646 Error Log Page Entries Supported: 128 00:24:30.646 Keep Alive: Supported 00:24:30.646 Keep Alive Granularity: 1000 ms 00:24:30.646 00:24:30.646 NVM Command Set Attributes 00:24:30.646 ========================== 00:24:30.646 Submission Queue Entry Size 00:24:30.646 Max: 64 00:24:30.646 Min: 64 00:24:30.646 Completion Queue Entry Size 00:24:30.646 Max: 16 00:24:30.646 Min: 16 00:24:30.646 Number of Namespaces: 1024 00:24:30.646 Compare Command: Not Supported 00:24:30.646 Write Uncorrectable Command: Not Supported 00:24:30.646 Dataset Management Command: Supported 
00:24:30.646 Write Zeroes Command: Supported 00:24:30.646 Set Features Save Field: Not Supported 00:24:30.646 Reservations: Not Supported 00:24:30.646 Timestamp: Not Supported 00:24:30.646 Copy: Not Supported 00:24:30.646 Volatile Write Cache: Present 00:24:30.646 Atomic Write Unit (Normal): 1 00:24:30.646 Atomic Write Unit (PFail): 1 00:24:30.646 Atomic Compare & Write Unit: 1 00:24:30.646 Fused Compare & Write: Not Supported 00:24:30.646 Scatter-Gather List 00:24:30.646 SGL Command Set: Supported 00:24:30.646 SGL Keyed: Not Supported 00:24:30.646 SGL Bit Bucket Descriptor: Not Supported 00:24:30.646 SGL Metadata Pointer: Not Supported 00:24:30.646 Oversized SGL: Not Supported 00:24:30.646 SGL Metadata Address: Not Supported 00:24:30.646 SGL Offset: Supported 00:24:30.646 Transport SGL Data Block: Not Supported 00:24:30.646 Replay Protected Memory Block: Not Supported 00:24:30.646 00:24:30.646 Firmware Slot Information 00:24:30.646 ========================= 00:24:30.646 Active slot: 0 00:24:30.646 00:24:30.646 Asymmetric Namespace Access 00:24:30.646 =========================== 00:24:30.646 Change Count : 0 00:24:30.646 Number of ANA Group Descriptors : 1 00:24:30.646 ANA Group Descriptor : 0 00:24:30.646 ANA Group ID : 1 00:24:30.646 Number of NSID Values : 1 00:24:30.646 Change Count : 0 00:24:30.646 ANA State : 1 00:24:30.646 Namespace Identifier : 1 00:24:30.646 00:24:30.646 Commands Supported and Effects 00:24:30.646 ============================== 00:24:30.646 Admin Commands 00:24:30.646 -------------- 00:24:30.646 Get Log Page (02h): Supported 00:24:30.646 Identify (06h): Supported 00:24:30.646 Abort (08h): Supported 00:24:30.646 Set Features (09h): Supported 00:24:30.646 Get Features (0Ah): Supported 00:24:30.646 Asynchronous Event Request (0Ch): Supported 00:24:30.646 Keep Alive (18h): Supported 00:24:30.646 I/O Commands 00:24:30.646 ------------ 00:24:30.646 Flush (00h): Supported 00:24:30.646 Write (01h): Supported LBA-Change 00:24:30.646 Read (02h): Supported 00:24:30.646 Write Zeroes (08h): Supported LBA-Change 00:24:30.646 Dataset Management (09h): Supported 00:24:30.646 00:24:30.646 Error Log 00:24:30.646 ========= 00:24:30.646 Entry: 0 00:24:30.646 Error Count: 0x3 00:24:30.646 Submission Queue Id: 0x0 00:24:30.646 Command Id: 0x5 00:24:30.646 Phase Bit: 0 00:24:30.646 Status Code: 0x2 00:24:30.646 Status Code Type: 0x0 00:24:30.646 Do Not Retry: 1 00:24:30.646 Error Location: 0x28 00:24:30.646 LBA: 0x0 00:24:30.646 Namespace: 0x0 00:24:30.646 Vendor Log Page: 0x0 00:24:30.646 ----------- 00:24:30.646 Entry: 1 00:24:30.646 Error Count: 0x2 00:24:30.646 Submission Queue Id: 0x0 00:24:30.646 Command Id: 0x5 00:24:30.646 Phase Bit: 0 00:24:30.646 Status Code: 0x2 00:24:30.646 Status Code Type: 0x0 00:24:30.646 Do Not Retry: 1 00:24:30.646 Error Location: 0x28 00:24:30.646 LBA: 0x0 00:24:30.646 Namespace: 0x0 00:24:30.646 Vendor Log Page: 0x0 00:24:30.646 ----------- 00:24:30.646 Entry: 2 00:24:30.646 Error Count: 0x1 00:24:30.646 Submission Queue Id: 0x0 00:24:30.646 Command Id: 0x4 00:24:30.646 Phase Bit: 0 00:24:30.646 Status Code: 0x2 00:24:30.646 Status Code Type: 0x0 00:24:30.646 Do Not Retry: 1 00:24:30.646 Error Location: 0x28 00:24:30.646 LBA: 0x0 00:24:30.646 Namespace: 0x0 00:24:30.646 Vendor Log Page: 0x0 00:24:30.646 00:24:30.646 Number of Queues 00:24:30.646 ================ 00:24:30.646 Number of I/O Submission Queues: 128 00:24:30.646 Number of I/O Completion Queues: 128 00:24:30.646 00:24:30.646 ZNS Specific Controller Data 00:24:30.646 
============================ 00:24:30.646 Zone Append Size Limit: 0 00:24:30.646 00:24:30.646 00:24:30.646 Active Namespaces 00:24:30.646 ================= 00:24:30.646 get_feature(0x05) failed 00:24:30.646 Namespace ID:1 00:24:30.646 Command Set Identifier: NVM (00h) 00:24:30.646 Deallocate: Supported 00:24:30.646 Deallocated/Unwritten Error: Not Supported 00:24:30.646 Deallocated Read Value: Unknown 00:24:30.646 Deallocate in Write Zeroes: Not Supported 00:24:30.646 Deallocated Guard Field: 0xFFFF 00:24:30.646 Flush: Supported 00:24:30.646 Reservation: Not Supported 00:24:30.646 Namespace Sharing Capabilities: Multiple Controllers 00:24:30.646 Size (in LBAs): 1953525168 (931GiB) 00:24:30.646 Capacity (in LBAs): 1953525168 (931GiB) 00:24:30.646 Utilization (in LBAs): 1953525168 (931GiB) 00:24:30.646 UUID: 18a796b6-fc2d-4988-b73b-ae8697e02b40 00:24:30.646 Thin Provisioning: Not Supported 00:24:30.646 Per-NS Atomic Units: Yes 00:24:30.646 Atomic Boundary Size (Normal): 0 00:24:30.646 Atomic Boundary Size (PFail): 0 00:24:30.646 Atomic Boundary Offset: 0 00:24:30.646 NGUID/EUI64 Never Reused: No 00:24:30.646 ANA group ID: 1 00:24:30.646 Namespace Write Protected: No 00:24:30.646 Number of LBA Formats: 1 00:24:30.646 Current LBA Format: LBA Format #00 00:24:30.646 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:30.646 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:30.646 rmmod nvme_tcp 00:24:30.646 rmmod nvme_fabrics 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.646 16:01:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:33.181 
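[editor's note, not emitted by the harness] Before the clean_kernel_target trace that follows, a condensed sketch of the nvmet configfs operations this test drives: configure_kernel_target was traced earlier, teardown comes next. The echo redirect targets are not visible in xtrace output; the attribute file names below are the standard nvmet configfs ones and are an assumption about where those echoed values land.
# sketch only -- paths and values from this run; attribute names are assumed
SUB=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
PORT=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir -p "$SUB/namespaces/1" "$PORT"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUB/attr_model"            # assumed target file
echo 1 > "$SUB/attr_allow_any_host"                                  # assumed target file
echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"                  # assumed target file
echo 1 > "$SUB/namespaces/1/enable"                                  # assumed target file
echo 10.0.0.1 > "$PORT/addr_traddr"                                  # assumed target file
echo tcp      > "$PORT/addr_trtype"                                  # assumed target file
echo 4420     > "$PORT/addr_trsvcid"                                 # assumed target file
echo ipv4     > "$PORT/addr_adrfam"                                  # assumed target file
ln -s "$SUB" "$PORT/subsystems/"
# teardown, mirroring clean_kernel_target below
echo 0 > "$SUB/namespaces/1/enable"                                  # assumed target file
rm -f "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$SUB/namespaces/1" "$PORT" "$SUB"
modprobe -r nvmet_tcp nvmet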
16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:33.181 16:01:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:34.115 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.115 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.115 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.115 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.115 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.115 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:34.115 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:34.115 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:34.115 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:35.051 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:35.310 00:24:35.310 real 0m9.883s 00:24:35.310 user 0m2.069s 00:24:35.310 sys 0m3.762s 00:24:35.310 16:01:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.310 16:01:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.310 ************************************ 00:24:35.310 END TEST nvmf_identify_kernel_target 00:24:35.310 ************************************ 00:24:35.310 16:01:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:35.310 16:01:04 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.310 16:01:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:35.310 16:01:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.310 16:01:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.310 ************************************ 00:24:35.310 START TEST nvmf_auth_host 00:24:35.310 ************************************ 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:35.310 * Looking for test storage... 00:24:35.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.310 16:01:04 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:35.310 16:01:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.848 
16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:37.848 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:37.848 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:37.848 Found net devices under 0000:09:00.0: 
cvl_0_0 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.848 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:37.849 Found net devices under 0000:09:00.1: cvl_0_1 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:24:37.849 00:24:37.849 --- 10.0.0.2 ping statistics --- 00:24:37.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.849 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:24:37.849 00:24:37.849 --- 10.0.0.1 ping statistics --- 00:24:37.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.849 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=114684 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 114684 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 114684 ']' 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
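For reference, the nvmf_tcp_init phase traced above reduces to a small amount of ip/iptables plumbing: the two E810 ports found earlier (cvl_0_0 and cvl_0_1) are cabled back-to-back, one of them is moved into a private network namespace, each side gets an address on 10.0.0.0/24, and a single iptables rule plus two pings confirm the path before any NVMe-oF traffic starts. A condensed sketch of those commands, using the interface, namespace and address names exactly as they appear in the trace:

  TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                            # one port moves into the namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                     # default-namespace side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"    # namespace side
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                              # default ns -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                          # namespace -> default ns

The namespace name is also what NVMF_TARGET_NS_CMD prefixes onto later commands, which is why nvmf_tgt below is launched via ip netns exec cvl_0_0_ns_spdk.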
00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:37.849 16:01:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=53263e804c0e8f8ea44f6138ac0741fc 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Mn3 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 53263e804c0e8f8ea44f6138ac0741fc 0 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 53263e804c0e8f8ea44f6138ac0741fc 0 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=53263e804c0e8f8ea44f6138ac0741fc 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Mn3 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Mn3 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Mn3 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:38.107 
16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a6964754f35c87178408665823cd6e2b0222757cab82c322ed661497266e411 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.AzG 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a6964754f35c87178408665823cd6e2b0222757cab82c322ed661497266e411 3 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a6964754f35c87178408665823cd6e2b0222757cab82c322ed661497266e411 3 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a6964754f35c87178408665823cd6e2b0222757cab82c322ed661497266e411 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.AzG 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.AzG 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.AzG 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.107 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1759584a811a0508f5e7fa8bd34d51cf783fdad0864f880c 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aVT 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1759584a811a0508f5e7fa8bd34d51cf783fdad0864f880c 0 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1759584a811a0508f5e7fa8bd34d51cf783fdad0864f880c 0 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1759584a811a0508f5e7fa8bd34d51cf783fdad0864f880c 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aVT 00:24:38.108 16:01:07 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aVT 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.aVT 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b6d07bc14b57f4d6eb8ae3a5061f89f586cfc3b4544edef2 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.D5y 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b6d07bc14b57f4d6eb8ae3a5061f89f586cfc3b4544edef2 2 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b6d07bc14b57f4d6eb8ae3a5061f89f586cfc3b4544edef2 2 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b6d07bc14b57f4d6eb8ae3a5061f89f586cfc3b4544edef2 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.D5y 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.D5y 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.D5y 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e982272d024d6d1862f0b99136cd155 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.D0O 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e982272d024d6d1862f0b99136cd155 1 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e982272d024d6d1862f0b99136cd155 1 
00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e982272d024d6d1862f0b99136cd155 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:38.108 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.D0O 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.D0O 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.D0O 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=79eca8c983937083d6346ebd48449c42 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qkP 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 79eca8c983937083d6346ebd48449c42 1 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 79eca8c983937083d6346ebd48449c42 1 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=79eca8c983937083d6346ebd48449c42 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qkP 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qkP 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qkP 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=5f98e927647ca56a4a834ff4df7ff34b0bf43d4abaaac19b 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VIz 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f98e927647ca56a4a834ff4df7ff34b0bf43d4abaaac19b 2 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f98e927647ca56a4a834ff4df7ff34b0bf43d4abaaac19b 2 00:24:38.366 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f98e927647ca56a4a834ff4df7ff34b0bf43d4abaaac19b 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VIz 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VIz 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.VIz 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d25b5526ab25a533e688ccefd2259a6 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9d7 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d25b5526ab25a533e688ccefd2259a6 0 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d25b5526ab25a533e688ccefd2259a6 0 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d25b5526ab25a533e688ccefd2259a6 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:38.367 16:01:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9d7 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9d7 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.9d7 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5c0b86c4833653092865d313178fa2216b6cc346f0dca0b331c276607e5290b0 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.mQt 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5c0b86c4833653092865d313178fa2216b6cc346f0dca0b331c276607e5290b0 3 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5c0b86c4833653092865d313178fa2216b6cc346f0dca0b331c276607e5290b0 3 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5c0b86c4833653092865d313178fa2216b6cc346f0dca0b331c276607e5290b0 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.mQt 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.mQt 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.mQt 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 114684 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 114684 ']' 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
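Every gen_dhchap_key call in this block follows the same recipe: pull len/2 random bytes from /dev/urandom as a hex string, wrap that string into a DHHC-1 secret tagged with the digest id (null=0, sha256=1, sha384=2, sha512=3), and drop the result into a chmod-0600 temp file whose path lands in keys[] or ckeys[]. The DHHC-1 wrapping itself is done by an inline python snippet the xtrace does not expand; as I read nvmf/common.sh it base64-encodes the hex string plus a trailing CRC32, so the sketch below only marks that step with a comment instead of reproducing it:

  gen_dhchap_key() {                                   # sketch of the helper traced above
      local digest=$1 len=$2
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex secret, e.g. 53263e804c0e8f8ea44f6138ac0741fc
      file=$(mktemp -t "spdk.key-$digest.XXX")
      # The real helper feeds $key through a python one-liner that appends a CRC32 and
      # base64-encodes the result, producing DHHC-1:<id>:<base64(key+crc)>: as seen above.
      echo "DHHC-1:${digests[$digest]}:$key:" > "$file" # placeholder formatting only
      chmod 0600 "$file"
      echo "$file"
  }

Called as gen_dhchap_key null 32, gen_dhchap_key sha384 48, gen_dhchap_key sha512 64, and so on, matching the lengths in the trace (a 32-character hex secret comes from 16 random bytes).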
00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.367 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mn3 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.AzG ]] 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AzG 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.aVT 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.D5y ]] 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.D5y 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.625 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.D0O 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qkP ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qkP 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.VIz 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.9d7 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.9d7 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.mQt 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
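With nvmf_tgt listening on /var/tmp/spdk.sock, the loop above hands every generated file to the target's keyring: keys[i] is registered as keyN and the matching controller secret ckeys[i] as ckeyN, with ckey4 skipped because ckeys[4] was deliberately left empty. Assuming rpc_cmd is the usual wrapper around scripts/rpc.py pointed at that socket, the equivalent explicit calls are:

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  # keys[] / ckeys[] hold the file paths produced by gen_dhchap_key above
  for i in "${!keys[@]}"; do
      $RPC keyring_file_add_key "key$i" "${keys[$i]}"               # e.g. key0 -> /tmp/spdk.key-null.Mn3
      [[ -n ${ckeys[$i]} ]] && $RPC keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  done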
00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:38.882 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:38.883 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:38.883 16:01:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:39.815 Waiting for block devices as requested 00:24:39.815 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:39.815 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:40.132 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:40.132 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:40.132 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:40.132 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.389 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:40.389 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:40.389 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:24:40.645 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:40.645 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:40.646 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:40.903 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:40.903 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:40.903 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:40.903 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:41.160 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:41.417 16:01:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:41.675 No valid GPT data, bailing 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:24:41.675 00:24:41.675 Discovery Log Number of Records 2, Generation counter 2 00:24:41.675 =====Discovery Log Entry 0====== 00:24:41.675 trtype: tcp 00:24:41.675 adrfam: ipv4 00:24:41.675 subtype: current discovery subsystem 00:24:41.675 treq: not specified, sq flow control disable supported 00:24:41.675 portid: 1 00:24:41.675 trsvcid: 4420 00:24:41.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:41.675 traddr: 10.0.0.1 00:24:41.675 eflags: none 00:24:41.675 sectype: none 00:24:41.675 =====Discovery Log Entry 1====== 00:24:41.675 trtype: tcp 00:24:41.675 adrfam: ipv4 00:24:41.675 subtype: nvme subsystem 00:24:41.675 treq: not specified, sq flow control disable supported 00:24:41.675 portid: 1 00:24:41.675 trsvcid: 4420 00:24:41.675 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:41.675 traddr: 10.0.0.1 00:24:41.675 eflags: none 00:24:41.675 sectype: none 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.675 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 
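The nvmet_auth_init step above builds the kernel-side target entirely through configfs: a subsystem named nqn.2024-02.io.spdk:cnode0 backed by the local /dev/nvme0n1, a TCP listener on 10.0.0.1:4420, and an allowed-hosts entry for nqn.2024-02.io.spdk:host0. The xtrace shows only bare mkdir/echo/ln commands because redirections are not traced, so the attribute file names in this sketch are filled in from the standard nvmet configfs layout rather than read out of the log:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  modprobe nvmet                                      # as in the trace (tcp transport module assumed available)
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
  echo 0 > "$subsys/attr_allow_any_host"              # only explicitly allowed hosts may connect
  ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420            # expect the two discovery log entries shown above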
]] 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.676 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.935 nvme0n1 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.935 16:01:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.935 
16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.935 nvme0n1 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.935 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.194 16:01:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.194 nvme0n1 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
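Each connect_authenticate round then exercises one digest/DH-group/key combination from both ends. On the kernel side, nvmet_auth_set_key writes the hash, DH group and the two DHHC-1 secrets into the host's configfs entry (the redirection targets are again inferred from the usual nvmet attribute names, not visible in the trace); on the SPDK side, bdev_nvme is restricted to the same algorithms and the controller is attached with the matching keyring entries. Roughly, for the sha256/ffdhe2048/key1 case above:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"           # digest under test
  echo ffdhe2048      > "$host/dhchap_dhgroup"        # DH group under test
  cat "${keys[1]}"    > "$host/dhchap_key"            # DHHC-1:00:MTc1OTU4... (host secret)
  cat "${ckeys[1]}"   > "$host/dhchap_ctrl_key"       # DHHC-1:02:YjZkMDdi... (controller secret)

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
       --dhchap-key key1 --dhchap-ctrlr-key ckey1     # prints the nvme0n1 bdev on success
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  $RPC bdev_nvme_detach_controller nvme0              # tear down before the next combination

The bare nvme0n1 lines interleaved in the output are the bdev name printed by each successful attach; the jq check then confirms the controller name before it is detached so the next digest/dhgroup/key tuple starts from a clean state.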
00:24:42.194 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.452 16:01:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.452 nvme0n1 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:42.452 16:01:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:42.452 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.453 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.453 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.453 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.453 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.453 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.453 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.453 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.710 nvme0n1 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.710 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.711 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.968 nvme0n1 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.968 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.969 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.227 nvme0n1 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.227 16:01:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.485 nvme0n1 00:24:43.485 
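Each connect_authenticate iteration ends with the same check-and-teardown visible here: the controller list is read back and its name compared against nvme0 before detaching. In isolation (assuming rpc_cmd wraps scripts/rpc.py as in the SPDK test harness):

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                     # attach, and therefore authentication, succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0  # clean up before the next key/dhgroup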
16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.485 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.743 nvme0n1 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
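The get_main_ns_ip fragments that precede every attach reduce to a small transport-to-variable lookup. A rough reconstruction from the expansions shown in the trace (the TEST_TRANSPORT name is an assumption; only its expanded value tcp is visible above):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # both emptiness checks expand on one script line in the trace (common.sh@747)
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # dereferenced value, here 10.0.0.1
      echo "${!ip}"
  }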
00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.743 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.001 nvme0n1 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.001 
16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.001 16:01:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.001 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.260 nvme0n1 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:44.260 16:01:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.260 16:01:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.517 nvme0n1 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:44.517 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.518 16:01:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.518 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 nvme0n1 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.775 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.034 16:01:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.034 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.293 nvme0n1 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
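The ckey=( ... ) assignments throughout the trace rely on bash's ':+' parameter expansion, so --dhchap-ctrlr-key is only passed when a controller key exists for that keyid; for keyid=4 the ckeys entry is empty and the option is dropped entirely. A standalone illustration (placeholder key material, not the values from this run):

  ckeys=([1]="DHHC-1:02:placeholder" [4]="")
  keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}); echo "${ckey[@]}"   # --dhchap-ctrlr-key ckey1
  keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}); echo "${ckey[@]}"   # (empty)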
00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.293 16:01:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.551 nvme0n1 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.551 16:01:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.551 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.809 nvme0n1 00:24:45.809 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.809 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.809 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.809 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.809 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.809 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:46.067 16:01:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.067 16:01:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.325 nvme0n1 00:24:46.325 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.325 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.325 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.325 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.325 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.325 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.583 
16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.583 16:01:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.583 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.148 nvme0n1 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.148 16:01:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.149 16:01:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.149 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.149 16:01:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.713 nvme0n1 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.713 
16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.713 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.714 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.971 nvme0n1 00:24:47.971 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.971 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.971 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.971 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.971 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.971 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.230 16:01:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.796 nvme0n1 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.796 16:01:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.729 nvme0n1 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.729 16:01:19 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.729 16:01:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.685 nvme0n1 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.685 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.686 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.250 nvme0n1 00:24:51.250 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.250 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.250 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.250 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.250 16:01:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.508 16:01:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.508 
16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
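The get_main_ns_ip helper being traced at this point resolves which address the host should dial for the transport under test; reconstructed from the expansions visible in the trace (variable names such as TEST_TRANSPORT are an assumption, the other steps mirror the logged commands), it is roughly:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs use the initiator IP
      # look up the variable *name* for the current transport, then expand it indirectly;
      # in this run that is NVMF_INITIATOR_IP, which expands to 10.0.0.1
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }

The printed 10.0.0.1 is then passed as -a to bdev_nvme_attach_controller in the next step of the trace.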
00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.508 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.439 nvme0n1 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:52.439 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.440 
16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.440 16:01:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 nvme0n1 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 nvme0n1 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 16:01:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.371 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.372 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.372 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.372 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.372 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.628 nvme0n1 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.628 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 nvme0n1 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.886 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.887 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.887 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.887 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 nvme0n1 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.144 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.402 nvme0n1 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.402 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:54.403 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.403 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.660 nvme0n1 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
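The records above repeat one DH-HMAC-CHAP cycle per digest/dhgroup/keyid combination: the target side is primed via nvmet_auth_set_key, the host is restricted to a single digest and DH group with bdev_nvme_set_options, a controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key where a controller key exists), and bdev_nvme_get_controllers / bdev_nvme_detach_controller verify and tear it down before the next combination. A minimal stand-alone sketch of that host-side sequence follows; it is not the auth.sh script itself, the rpc_cmd definition below is an assumed stand-in for the suite's wrapper, and the key names key0/ckey0 refer to secrets provisioned earlier in the test (not shown in this excerpt).

rpc_cmd() { ./scripts/rpc.py "$@"; }   # assumed stand-in for the autotest rpc_cmd helper
# Restrict the host to the digest/DH group under test (values taken from the log above).
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# Attach with the host key; --dhchap-ctrlr-key is added when the log tests bidirectional auth.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Confirm the controller came up, then detach before the next digest/dhgroup/keyid combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The same sequence repeats below for ffdhe3072 and ffdhe4096 with key IDs 0 through 4.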
00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.660 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.918 nvme0n1 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.918 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.176 nvme0n1 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.176 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.433 nvme0n1 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:55.433 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.434 16:01:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.691 nvme0n1 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.691 16:01:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.691 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.949 nvme0n1 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.949 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.207 nvme0n1 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.207 16:01:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.207 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.208 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.208 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.208 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.774 nvme0n1 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:56.774 16:01:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.774 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.032 nvme0n1 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:57.032 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:57.033 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.291 nvme0n1 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.291 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.856 nvme0n1 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.856 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.857 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.421 nvme0n1 00:24:58.421 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.421 16:01:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.421 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.421 16:01:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.421 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.421 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.986 nvme0n1 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.986 16:01:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.552 nvme0n1 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:59.552 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
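For reference, the cycle this trace repeats for every digest/dhgroup/keyid combination reduces to the commands below. This is a hedged sketch assembled only from the calls visible in the trace itself: rpc_cmd is assumed to be the autotest wrapper around SPDK's scripts/rpc.py, and nvmet_auth_set_key is the test helper that installs the DHHC-1 secret on the target side (its internals are not shown in this log); everything else mirrors the traced invocations.

  # One iteration of the auth loop, reconstructed from the trace above.
  digest=sha384 dhgroup=ffdhe4096 keyid=1

  # 1. Install the key (and the controller key, when one exists) for this keyid on the target.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # 2. Limit the host to the same digest and DH group before connecting.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # 3. Attach over TCP with the matching DH-HMAC-CHAP key material.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # 4. Confirm the controller authenticated and came up, then detach it.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0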
00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.553 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.118 nvme0n1 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.118 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
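Similarly, the get_main_ns_ip block that precedes every attach resolves which address the initiator dials. Reconstructed only from the candidate table expanded in the trace, it behaves roughly as sketched below; the transport variable name ($TEST_TRANSPORT) and the exact function body are assumptions, since the log shows the expanded trace lines rather than the helper's source.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # Transport is "tcp" in this run, so the initiator-side address is chosen.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable holding the IP
      ip=${!ip}                              # dereference -> 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
  }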
00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.119 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.062 nvme0n1 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.063 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.000 nvme0n1 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.000 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.930 nvme0n1 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.930 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.897 nvme0n1 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.897 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.898 16:01:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.898 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 nvme0n1 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 nvme0n1 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.828 16:01:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.828 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.829 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.086 nvme0n1 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.086 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.087 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.344 nvme0n1 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.344 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.345 16:01:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.345 16:01:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.345 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 nvme0n1 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.602 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.603 nvme0n1 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.603 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.860 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 nvme0n1 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.861 
16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.861 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.119 16:01:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 nvme0n1 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.119 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
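Each pass of the loop traced above repeats the same initiator-side pattern: host/auth.sh first pushes the keyid-th secret to the target through its nvmet_auth_set_key helper, then pins the host to a single digest/dhgroup pair and reconnects with the matching key. As a rough standalone sketch of one pass (rpc.py stands in for the test's rpc_cmd wrapper; the NQNs, address, port and key names are the ones visible in the trace, and key1/ckey1 are assumed to have been registered earlier in the run):

  # pin the initiator to one digest/dhgroup combination for this pass
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # attach using key 1 plus its controller key (both set up earlier by the script)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1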
00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.377 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.377 nvme0n1 00:25:06.377 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.377 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.377 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.377 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.377 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.377 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.634 16:01:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
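The get_main_ns_ip block that recurs throughout this trace (nvmf/common.sh lines 741 to 755) resolves which address the initiator should dial for the current transport: it maps each transport to the name of an environment variable, then dereferences that name. The sketch below is reconstructed from the trace rather than quoted from the script, and the transport variable name ($TEST_TRANSPORT) is a guess; in this run it resolves to tcp, NVMF_INITIATOR_IP and 10.0.0.1.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      [[ -z $TEST_TRANSPORT ]] && return 1                  # transport must be known (tcp here)
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}                  # variable *name*, e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                           # indirect expansion, 10.0.0.1 in this run
      echo "${!ip}"
  }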
00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.634 nvme0n1 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.634 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.892 
16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.892 nvme0n1 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.892 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.150 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.407 nvme0n1 00:25:07.407 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.407 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.407 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.407 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.407 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.408 16:01:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.408 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.665 nvme0n1 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
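[editor's note] Each pass of the loop traced above exercises one digest/dhgroup/key combination end to end: the bdev_nvme options are restricted to the pair under test, a controller is attached with the matching DH-HMAC-CHAP keys, its presence is verified, and it is detached. A rough equivalent of one such pass, expressed with SPDK's rpc.py client, is sketched below; the rpc.py path is assumed, and the key names key1/ckey1 are assumed to have been registered by the test's earlier setup (only the RPC names and flags visible in this trace are used).

    #!/usr/bin/env bash
    # Sketch only: mirrors the host/auth.sh@60-65 steps for a single key, not the full test.
    rpc=./scripts/rpc.py   # assumed location of the SPDK RPC client

    # Accept only the digest/dhgroup pair being exercised in this iteration.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # Attach to the authenticated subsystem with the host key and the controller (bidirectional) key.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller actually came up, then clean it up before the next combination.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
    $rpc bdev_nvme_detach_controller nvme0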
00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.665 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 nvme0n1 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.923 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.180 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.180 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.180 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.438 nvme0n1 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.438 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.716 nvme0n1 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
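[editor's note] The nvmf/common.sh@741-755 records repeated before every attach are get_main_ns_ip resolving which address the initiator should dial: an associative array maps the transport to the name of an environment variable, and for tcp that is NVMF_INITIATOR_IP (10.0.0.1 throughout this run). A condensed sketch of that lookup follows; the variable holding the transport name is assumed here to be TEST_TRANSPORT, since only its value ("tcp") appears in the trace.

    # Condensed sketch of the selection logic visible in the trace above.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT ]] && return 1                    # no transport configured
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # unknown transport
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # name of the env var to read
        [[ -z ${!ip} ]] && return 1                             # env var not populated
        echo "${!ip}"                                           # e.g. 10.0.0.1 for tcp
    }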
00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.716 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.281 nvme0n1 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.281 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
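[editor's note] On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51) pushes the digest, DH group, host key and controller key for the host entry before the SPDK host reconnects; the echo'd values in the trace suggest this is done through the kernel nvmet configfs host entry. A minimal sketch of that step is shown below under that assumption; the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the ones exposed by nvmet targets with DH-CHAP support, and the secrets are placeholders rather than the keys used in this run.

    #!/usr/bin/env bash
    # Sketch only: programs one digest/dhgroup/key combination into an existing nvmet host entry.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_cfs=/sys/kernel/config/nvmet/hosts/${hostnqn}   # assumes the host entry already exists

    echo 'hmac(sha512)' > "${host_cfs}/dhchap_hash"       # digest, seen as 'hmac(sha512)' in the trace
    echo 'ffdhe6144'    > "${host_cfs}/dhchap_dhgroup"    # DH group for this pass
    echo 'DHHC-1:00:<base64-secret>:' > "${host_cfs}/dhchap_key"       # host secret (placeholder)
    echo 'DHHC-1:02:<base64-secret>:' > "${host_cfs}/dhchap_ctrl_key"  # controller secret for bidirectional auth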
00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.282 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.848 nvme0n1 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.848 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.413 nvme0n1 00:25:10.413 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.413 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.413 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.413 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.414 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.414 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.979 nvme0n1 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.979 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.544 nvme0n1 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.544 16:01:41 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.544 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMyNjNlODA0YzBlOGY4ZWE0NGY2MTM4YWMwNzQxZmNdT38r: 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: ]] 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OWE2OTY0NzU0ZjM1Yzg3MTc4NDA4NjY1ODIzY2Q2ZTJiMDIyMjc1N2NhYjgyYzMyMmVkNjYxNDk3MjY2ZTQxMX+ZP5s=: 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.545 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.478 nvme0n1 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.478 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.411 nvme0n1 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.411 16:01:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGU5ODIyNzJkMDI0ZDZkMTg2MmYwYjk5MTM2Y2QxNTXG3EfM: 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzllY2E4Yzk4MzkzNzA4M2Q2MzQ2ZWJkNDg0NDljNDKy+L9T: 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.411 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.343 nvme0n1 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY5OGU5Mjc2NDdjYTU2YTRhODM0ZmY0ZGY3ZmYzNGIwYmY0M2Q0YWJhYWFjMTliszJ3OA==: 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGQyNWI1NTI2YWIyNWE1MzNlNjg4Y2NlZmQyMjU5YTa2jeU4: 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:14.343 16:01:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.343 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.907 nvme0n1 00:25:14.907 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.907 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.907 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.907 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.907 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NWMwYjg2YzQ4MzM2NTMwOTI4NjVkMzEzMTc4ZmEyMjE2YjZjYzM0NmYwZGNhMGIzMzFjMjc2NjA3ZTUyOTBiMMaqjAw=: 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:15.165 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.097 nvme0n1 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:16.097 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MTc1OTU4NGE4MTFhMDUwOGY1ZTdmYThiZDM0ZDUxY2Y3ODNmZGFkMDg2NGY4ODBjYwqYJA==: 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjZkMDdiYzE0YjU3ZjRkNmViOGFlM2E1MDYxZjg5ZjU4NmNmYzNiNDU0NGVkZWYy+Q189A==: 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.098 
16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.098 request: 00:25:16.098 { 00:25:16.098 "name": "nvme0", 00:25:16.098 "trtype": "tcp", 00:25:16.098 "traddr": "10.0.0.1", 00:25:16.098 "adrfam": "ipv4", 00:25:16.098 "trsvcid": "4420", 00:25:16.098 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:16.098 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:16.098 "prchk_reftag": false, 00:25:16.098 "prchk_guard": false, 00:25:16.098 "hdgst": false, 00:25:16.098 "ddgst": false, 00:25:16.098 "method": "bdev_nvme_attach_controller", 00:25:16.098 "req_id": 1 00:25:16.098 } 00:25:16.098 Got JSON-RPC error response 00:25:16.098 response: 00:25:16.098 { 00:25:16.098 "code": -5, 00:25:16.098 "message": "Input/output error" 00:25:16.098 } 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.098 request: 00:25:16.098 { 00:25:16.098 "name": "nvme0", 00:25:16.098 "trtype": "tcp", 00:25:16.098 "traddr": "10.0.0.1", 00:25:16.098 "adrfam": "ipv4", 00:25:16.098 "trsvcid": "4420", 00:25:16.098 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:16.098 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:16.098 "prchk_reftag": false, 00:25:16.098 "prchk_guard": false, 00:25:16.098 "hdgst": false, 00:25:16.098 "ddgst": false, 00:25:16.098 "dhchap_key": "key2", 00:25:16.098 "method": "bdev_nvme_attach_controller", 00:25:16.098 "req_id": 1 00:25:16.098 } 00:25:16.098 Got JSON-RPC error response 00:25:16.098 response: 00:25:16.098 { 00:25:16.098 "code": -5, 00:25:16.098 "message": "Input/output error" 00:25:16.098 } 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:16.098 16:01:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.098 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.356 request: 00:25:16.356 { 00:25:16.356 "name": "nvme0", 00:25:16.356 "trtype": "tcp", 00:25:16.356 "traddr": "10.0.0.1", 00:25:16.356 "adrfam": "ipv4", 
00:25:16.356 "trsvcid": "4420", 00:25:16.356 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:16.356 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:16.356 "prchk_reftag": false, 00:25:16.356 "prchk_guard": false, 00:25:16.356 "hdgst": false, 00:25:16.356 "ddgst": false, 00:25:16.356 "dhchap_key": "key1", 00:25:16.356 "dhchap_ctrlr_key": "ckey2", 00:25:16.356 "method": "bdev_nvme_attach_controller", 00:25:16.356 "req_id": 1 00:25:16.356 } 00:25:16.356 Got JSON-RPC error response 00:25:16.356 response: 00:25:16.356 { 00:25:16.356 "code": -5, 00:25:16.356 "message": "Input/output error" 00:25:16.356 } 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:16.356 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:16.357 rmmod nvme_tcp 00:25:16.357 rmmod nvme_fabrics 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 114684 ']' 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 114684 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 114684 ']' 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 114684 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:16.357 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114684 00:25:16.357 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:16.357 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:16.357 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114684' 00:25:16.357 killing process with pid 114684 00:25:16.357 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 114684 00:25:16.357 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 114684 00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
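For reference, the host-side DH-HMAC-CHAP flow traced above reduces to a short JSON-RPC sequence (rpc_cmd is the autotest wrapper around the SPDK RPC client; the address, NQNs and key names are exactly the ones appearing in the log). A minimal sketch of one positive pass, assuming nothing beyond what the trace shows:

    # restrict the initiator to the digest/DH group under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach with the host key and the bidirectional (controller) key for this key id
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # authentication succeeded if the controller shows up, then detach for the next key id
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The negative cases above run the same attach with no key, or with a mismatched key1/ckey2 pair, and expect the JSON-RPC "Input/output error" responses recorded in the log.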
00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.614 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:19.149 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:20.090 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:20.090 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:20.090 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:20.090 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:20.090 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:20.090 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:20.090 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:20.090 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:20.090 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:21.072 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:25:21.330 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Mn3 /tmp/spdk.key-null.aVT /tmp/spdk.key-sha256.D0O /tmp/spdk.key-sha384.VIz /tmp/spdk.key-sha512.mQt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:21.330 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:22.706 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:22.706 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:22.706 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:22.706 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:22.706 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:22.706 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:22.706 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:22.706 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:22.706 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:22.706 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:22.706 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:22.706 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:22.706 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:22.707 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:22.707 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:22.707 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:22.707 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:22.707 00:25:22.707 real 0m47.308s 00:25:22.707 user 0m44.451s 00:25:22.707 sys 0m5.903s 00:25:22.707 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:22.707 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.707 ************************************ 00:25:22.707 END TEST nvmf_auth_host 00:25:22.707 ************************************ 00:25:22.707 16:01:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:22.707 16:01:52 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:22.707 16:01:52 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:22.707 16:01:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:22.707 16:01:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:22.707 16:01:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:22.707 ************************************ 00:25:22.707 START TEST nvmf_digest 00:25:22.707 ************************************ 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:22.707 * Looking for test storage... 
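Between the auth suite and the digest suite, cleanup() also tears down the kernel nvmet target that served as the authenticating controller. Condensed from the configfs operations in the trace above (the paths are exactly those logged; the trace also shows an 'echo 0' whose redirect target falls outside this excerpt, presumably disabling the namespace before removal):

    rm    /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet

The order matters: symlinks under allowed_hosts and ports/1/subsystems are removed before the directories they point into, leaf directories before their parents, and the modules only once the configfs tree is empty.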
00:25:22.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:22.707 16:01:52 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:22.707 16:01:52 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:25.244 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:25.245 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:25.245 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:25.245 Found net devices under 0000:09:00.0: cvl_0_0 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:25.245 Found net devices under 0000:09:00.1: cvl_0_1 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:25.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:25:25.245 00:25:25.245 --- 10.0.0.2 ping statistics --- 00:25:25.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.245 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:25.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:25:25.245 00:25:25.245 --- 10.0.0.1 ping statistics --- 00:25:25.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.245 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:25.245 ************************************ 00:25:25.245 START TEST nvmf_digest_clean 00:25:25.245 ************************************ 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=123875 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 123875 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 123875 ']' 00:25:25.245 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.246 
16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:25.246 [2024-07-12 16:01:54.614761] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:25.246 [2024-07-12 16:01:54.614841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.246 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.246 [2024-07-12 16:01:54.678153] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.246 [2024-07-12 16:01:54.787359] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.246 [2024-07-12 16:01:54.787413] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.246 [2024-07-12 16:01:54.787426] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.246 [2024-07-12 16:01:54.787437] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.246 [2024-07-12 16:01:54.787447] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
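The digest suite then starts its own NVMe-oF target inside the network namespace prepared earlier, pausing it with --wait-for-rpc so the harness can finish configuration before subsystems come up. A sketch of the launch visible in the trace (paths shortened; backgrounding with & and capturing $! stand in for the harness' own process bookkeeping):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!               # 123875 in this run
    waitforlisten "$nvmfpid"
    # common_target_config then creates the null0 bdev and the TCP listener on
    # 10.0.0.2:4420, as the tcp.c notices just below record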
00:25:25.246 [2024-07-12 16:01:54.787490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.246 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:25.246 null0 00:25:25.246 [2024-07-12 16:01:54.958911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.503 [2024-07-12 16:01:54.983108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.503 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.503 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:25.503 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:25.503 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=123901 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 123901 /var/tmp/bperf.sock 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 123901 ']' 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:25.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:25.504 16:01:54 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:25.504 [2024-07-12 16:01:55.027407] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:25.504 [2024-07-12 16:01:55.027471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123901 ] 00:25:25.504 EAL: No free 2048 kB hugepages reported on node 1 00:25:25.504 [2024-07-12 16:01:55.083614] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.504 [2024-07-12 16:01:55.189395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.761 16:01:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:25.761 16:01:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:25.761 16:01:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:25.761 16:01:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:25.761 16:01:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:26.018 16:01:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.018 16:01:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.276 nvme0n1 00:25:26.533 16:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:26.533 16:01:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:26.533 Running I/O for 2 seconds... 
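[editor's note] The clean-digest pass drives everything over the two UNIX-domain RPC sockets seen above: bdevperf is started idle and only begins I/O once the script initializes it and attaches a controller with TCP data digest enabled. A condensed sketch of that sequence using the exact commands from this trace (paths shortened to be relative to the SPDK checkout; the flag meanings in the comments are my reading of bdevperf's options, not stated in the log):

  # core mask 0x2, RPC on bperf.sock, 4 KiB random reads, 2 s, queue depth 128;
  # -z = hold I/O until perform_tests is requested, --wait-for-rpc = wait for framework_start_init
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # finish subsystem init, then attach the target's cnode1 with data digest (--ddgst) enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # kick off the 2-second run whose results appear in the table below
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests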
00:25:29.056 00:25:29.056 Latency(us) 00:25:29.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.056 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:29.056 nvme0n1 : 2.05 18100.91 70.71 0.00 0.00 6955.97 3446.71 45632.47 00:25:29.056 =================================================================================================================== 00:25:29.056 Total : 18100.91 70.71 0.00 0.00 6955.97 3446.71 45632.47 00:25:29.056 0 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:29.056 | select(.opcode=="crc32c") 00:25:29.056 | "\(.module_name) \(.executed)"' 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 123901 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 123901 ']' 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 123901 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123901 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123901' 00:25:29.056 killing process with pid 123901 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 123901 00:25:29.056 Received shutdown signal, test time was about 2.000000 seconds 00:25:29.056 00:25:29.056 Latency(us) 00:25:29.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.056 =================================================================================================================== 00:25:29.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 123901 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:29.056 16:01:58 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=124389 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 124389 /var/tmp/bperf.sock 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 124389 ']' 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:29.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.056 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:29.056 [2024-07-12 16:01:58.769031] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:29.056 [2024-07-12 16:01:58.769120] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124389 ] 00:25:29.056 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:29.056 Zero copy mechanism will not be used. 
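[editor's note] A quick sanity check on the 4 KiB randread table above: assuming the MiB/s column is simply IOPS times the I/O size (the header does not spell this out), bc reproduces the reported value:

  # 18100.91 IOPS x 4096 B per I/O, converted to MiB/s
  echo 'scale=2; 18100.91 * 4096 / 1048576' | bc
  # 70.70 -- matches the 70.71 MiB/s column within rounding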
00:25:29.314 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.314 [2024-07-12 16:01:58.834353] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.314 [2024-07-12 16:01:58.946725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.314 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:29.314 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:29.314 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:29.314 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:29.314 16:01:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:29.878 16:01:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:29.878 16:01:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.135 nvme0n1 00:25:30.135 16:01:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:30.135 16:01:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:30.135 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:30.135 Zero copy mechanism will not be used. 00:25:30.135 Running I/O for 2 seconds... 
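[editor's note] After every run the script reads back bdevperf's accel statistics and checks that the crc32c digest work actually executed, and in which module; since these runs request no DSA offload (scan_dsa=false), the expected module is software. A sketch of that check using the RPC and jq filter exactly as they appear in the trace (the herestring wrapper is mine; the script does a plain read followed by the same two tests):

  stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
          | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  (( acc_executed > 0 ))              # at least one digest was computed
  [[ $acc_module == software ]]       # and it ran in the software accel module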
00:25:32.765 00:25:32.765 Latency(us) 00:25:32.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.765 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:32.765 nvme0n1 : 2.00 3407.06 425.88 0.00 0.00 4691.57 4029.25 6796.33 00:25:32.765 =================================================================================================================== 00:25:32.765 Total : 3407.06 425.88 0.00 0.00 4691.57 4029.25 6796.33 00:25:32.765 0 00:25:32.765 16:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:32.765 16:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:32.765 16:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:32.765 16:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:32.765 16:02:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:32.765 | select(.opcode=="crc32c") 00:25:32.765 | "\(.module_name) \(.executed)"' 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 124389 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 124389 ']' 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 124389 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124389 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124389' 00:25:32.765 killing process with pid 124389 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 124389 00:25:32.765 Received shutdown signal, test time was about 2.000000 seconds 00:25:32.765 00:25:32.765 Latency(us) 00:25:32.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.765 =================================================================================================================== 00:25:32.765 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 124389 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:32.765 16:02:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=124835 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 124835 /var/tmp/bperf.sock 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 124835 ']' 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:32.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.765 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:32.765 [2024-07-12 16:02:02.483621] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:25:32.765 [2024-07-12 16:02:02.483712] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124835 ] 00:25:33.023 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.023 [2024-07-12 16:02:02.540902] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.023 [2024-07-12 16:02:02.642553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.023 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.023 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:33.023 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:33.023 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:33.023 16:02:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:33.587 16:02:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:33.587 16:02:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:33.844 nvme0n1 00:25:33.844 16:02:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:33.844 16:02:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:33.844 Running I/O for 2 seconds... 
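[editor's note] The IOPS and latency columns in these tables are consistent with each other through Little's law: at a constant queue depth, IOPS is roughly queue depth divided by average latency. Taking the randwrite / 4 KiB / qd=128 run whose table appears just below (the relation is a general queueing identity, not something the log states):

  # 128 outstanding I/Os over a 5773.20 us average latency -> expected IOPS
  echo 'scale=1; 128 * 1000000 / 5773.20' | bc
  # ~22171, within a fraction of a percent of the 22135.69 IOPS reported below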
00:25:36.372 00:25:36.372 Latency(us) 00:25:36.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.372 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:36.372 nvme0n1 : 2.00 22135.69 86.47 0.00 0.00 5773.20 2487.94 16505.36 00:25:36.372 =================================================================================================================== 00:25:36.372 Total : 22135.69 86.47 0.00 0.00 5773.20 2487.94 16505.36 00:25:36.372 0 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:36.372 | select(.opcode=="crc32c") 00:25:36.372 | "\(.module_name) \(.executed)"' 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 124835 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 124835 ']' 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 124835 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124835 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124835' 00:25:36.372 killing process with pid 124835 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 124835 00:25:36.372 Received shutdown signal, test time was about 2.000000 seconds 00:25:36.372 00:25:36.372 Latency(us) 00:25:36.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.372 =================================================================================================================== 00:25:36.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.372 16:02:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 124835 00:25:36.372 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:36.372 16:02:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:36.372 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:36.372 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:36.372 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:36.372 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:36.372 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=125245 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 125245 /var/tmp/bperf.sock 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 125245 ']' 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:36.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:36.630 [2024-07-12 16:02:06.144311] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:36.630 [2024-07-12 16:02:06.144426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125245 ] 00:25:36.630 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:36.630 Zero copy mechanism will not be used. 
00:25:36.630 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.630 [2024-07-12 16:02:06.206241] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.630 [2024-07-12 16:02:06.309189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:36.630 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:37.195 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:37.195 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:37.452 nvme0n1 00:25:37.452 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:37.452 16:02:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:37.452 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:37.452 Zero copy mechanism will not be used. 00:25:37.452 Running I/O for 2 seconds... 
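[editor's note] Each bdevperf instance is torn down with the same killprocess idiom visible in these traces: confirm the pid is still alive, check the process name (the helper special-cases a bare sudo wrapper), send the signal, and wait so the "Received shutdown signal" latency summary is flushed. A compressed sketch; the body is my paraphrase of the xtrace lines, not the verbatim autotest_common.sh source:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                      # process must still exist
    pname=$(ps --no-headers -o comm= "$pid")        # e.g. reactor_1 in the runs above
    [ "$pname" = sudo ] && return 1                 # real helper handles sudo specially; skipped here
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                      # wait so the shutdown summary gets printed
  }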
00:25:40.005 00:25:40.005 Latency(us) 00:25:40.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.005 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:40.005 nvme0n1 : 2.01 2638.48 329.81 0.00 0.00 6051.16 4369.07 14563.56 00:25:40.005 =================================================================================================================== 00:25:40.005 Total : 2638.48 329.81 0.00 0.00 6051.16 4369.07 14563.56 00:25:40.005 0 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:40.005 | select(.opcode=="crc32c") 00:25:40.005 | "\(.module_name) \(.executed)"' 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 125245 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 125245 ']' 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 125245 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125245 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125245' 00:25:40.005 killing process with pid 125245 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 125245 00:25:40.005 Received shutdown signal, test time was about 2.000000 seconds 00:25:40.005 00:25:40.005 Latency(us) 00:25:40.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.005 =================================================================================================================== 00:25:40.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 125245 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 123875 00:25:40.005 16:02:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 123875 ']' 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 123875 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123875 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:40.005 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:40.006 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123875' 00:25:40.006 killing process with pid 123875 00:25:40.006 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 123875 00:25:40.006 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 123875 00:25:40.264 00:25:40.264 real 0m15.355s 00:25:40.264 user 0m30.845s 00:25:40.264 sys 0m3.907s 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:40.264 ************************************ 00:25:40.264 END TEST nvmf_digest_clean 00:25:40.264 ************************************ 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:40.264 ************************************ 00:25:40.264 START TEST nvmf_digest_error 00:25:40.264 ************************************ 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=125686 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 125686 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 125686 ']' 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.264 16:02:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.522 [2024-07-12 16:02:10.034992] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:40.522 [2024-07-12 16:02:10.035127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.522 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.522 [2024-07-12 16:02:10.103735] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.522 [2024-07-12 16:02:10.207635] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.522 [2024-07-12 16:02:10.207694] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.522 [2024-07-12 16:02:10.207718] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.522 [2024-07-12 16:02:10.207729] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.522 [2024-07-12 16:02:10.207738] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
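[editor's note] The nvmf_digest_error variant keeps the same host setup (controller attached with --ddgst, unlimited bdev retries) but wires the target's accel framework so its crc32c digests can be corrupted on demand: crc32c is routed to the "error" accel module while --wait-for-rpc holds initialization, injection stays disabled while the connection is set up, and is then switched to corrupt mode once I/O starts, producing the data-digest-error and COMMAND TRANSIENT TRANSPORT ERROR notices further down. A condensed sketch built only from the RPCs that follow in this trace (target RPCs on the default /var/tmp/spdk.sock, host-side ones on /var/tmp/bperf.sock; -i 256 is copied verbatim, its exact semantics are not stated in the log):

  # target side: route crc32c through the error-injection accel module, keep injection off for now
  ./scripts/rpc.py accel_assign_opc -o crc32c -m error
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # host side (bdevperf): count NVMe errors, retry forever, attach with data digest enabled
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # now switch injection to corrupt crc32c results and run the workload
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests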
00:25:40.522 [2024-07-12 16:02:10.207764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.522 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:40.522 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:40.522 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:40.522 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.522 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.780 [2024-07-12 16:02:10.264254] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.780 null0 00:25:40.780 [2024-07-12 16:02:10.377674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.780 [2024-07-12 16:02:10.401866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=125823 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 125823 /var/tmp/bperf.sock 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 125823 ']' 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:40.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:40.780 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.780 [2024-07-12 16:02:10.446079] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:40.780 [2024-07-12 16:02:10.446140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125823 ] 00:25:40.780 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.780 [2024-07-12 16:02:10.502047] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.038 [2024-07-12 16:02:10.607041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.038 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.038 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:41.038 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:41.038 16:02:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:41.295 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:41.296 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.296 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:41.296 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.296 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.296 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.860 nvme0n1 00:25:41.860 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:41.860 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.860 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:41.860 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.860 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:41.860 16:02:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:42.117 Running I/O for 2 seconds... 00:25:42.117 [2024-07-12 16:02:11.679019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.117 [2024-07-12 16:02:11.679064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.117 [2024-07-12 16:02:11.679082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.117 [2024-07-12 16:02:11.693855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.117 [2024-07-12 16:02:11.693886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.117 [2024-07-12 16:02:11.693902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.117 [2024-07-12 16:02:11.706641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.117 [2024-07-12 16:02:11.706673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.117 [2024-07-12 16:02:11.706701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.117 [2024-07-12 16:02:11.717917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.117 [2024-07-12 16:02:11.717949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.117 [2024-07-12 16:02:11.717981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.117 [2024-07-12 16:02:11.731972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.117 [2024-07-12 16:02:11.732001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.117 [2024-07-12 16:02:11.732018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.117 [2024-07-12 16:02:11.745445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.117 [2024-07-12 16:02:11.745478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.745496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.118 [2024-07-12 16:02:11.756754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.118 [2024-07-12 16:02:11.756783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16281 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.756798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.118 [2024-07-12 16:02:11.770836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.118 [2024-07-12 16:02:11.770866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.770882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.118 [2024-07-12 16:02:11.783417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.118 [2024-07-12 16:02:11.783449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.783467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.118 [2024-07-12 16:02:11.795346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.118 [2024-07-12 16:02:11.795374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.795391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.118 [2024-07-12 16:02:11.808134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.118 [2024-07-12 16:02:11.808166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.808183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.118 [2024-07-12 16:02:11.823872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.118 [2024-07-12 16:02:11.823923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.823942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.118 [2024-07-12 16:02:11.835960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.118 [2024-07-12 16:02:11.835991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.118 [2024-07-12 16:02:11.836009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.375 [2024-07-12 16:02:11.846861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.375 [2024-07-12 16:02:11.846891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.375 [2024-07-12 16:02:11.846908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.375 [2024-07-12 16:02:11.860097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.375 [2024-07-12 16:02:11.860127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.375 [2024-07-12 16:02:11.860160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.375 [2024-07-12 16:02:11.871903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.375 [2024-07-12 16:02:11.871935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.375 [2024-07-12 16:02:11.871952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.375 [2024-07-12 16:02:11.883854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.375 [2024-07-12 16:02:11.883885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.375 [2024-07-12 16:02:11.883902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.375 [2024-07-12 16:02:11.897338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.375 [2024-07-12 16:02:11.897369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.375 [2024-07-12 16:02:11.897387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.375 [2024-07-12 16:02:11.908770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.375 [2024-07-12 16:02:11.908801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.375 [2024-07-12 16:02:11.908818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.375 [2024-07-12 16:02:11.922447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:11.922485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:11.922502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:11.936321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:11.936364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:11.936380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:11.949760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:11.949792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:11.949808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:11.961547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:11.961576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:11.961592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:11.974867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:11.974899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:11.974917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:11.987546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:11.987575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:11.987592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.000555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.000588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.000606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.014379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.014408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.014424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.025345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 
[2024-07-12 16:02:12.025374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.025390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.038292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.038329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.038368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.052495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.052540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.052557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.062493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.062522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.062537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.075789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.075821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.075838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.088530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.088559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.088575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.376 [2024-07-12 16:02:12.101183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.376 [2024-07-12 16:02:12.101214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.376 [2024-07-12 16:02:12.101246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.113853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.113885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.113902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.125198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.125229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.125246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.137835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.137879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.137896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.152716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.152751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.152769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.166121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.166152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.166170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.176876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.176904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.176920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.190063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.190093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.190109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.203547] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.203578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.203595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.214860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.214892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.214909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.229067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.229099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.229116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.240937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.240969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.240987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.253641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.634 [2024-07-12 16:02:12.253673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.634 [2024-07-12 16:02:12.253696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.634 [2024-07-12 16:02:12.265871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.265903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.265920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.635 [2024-07-12 16:02:12.276927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.276958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.276975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:42.635 [2024-07-12 16:02:12.290398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.290426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.290442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.635 [2024-07-12 16:02:12.302425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.302453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.302469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.635 [2024-07-12 16:02:12.313266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.313295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.313348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.635 [2024-07-12 16:02:12.327160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.327191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.327223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.635 [2024-07-12 16:02:12.339696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.339726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.339743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.635 [2024-07-12 16:02:12.354240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.635 [2024-07-12 16:02:12.354272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.635 [2024-07-12 16:02:12.354289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.366989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.367022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.367038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.378026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.378055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.378072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.391147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.391194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.391211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.405211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.405241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.405258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.417995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.418027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.418044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.429899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.429943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.429960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.441799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.441845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.441863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.453982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.454010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.454025] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.467202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.467248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.467265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.479372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.479403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.479419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.491873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.491901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.491917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.506436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.506467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.506484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.517391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.517419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.517435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.530924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.530955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.530972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.545590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.545622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.545640] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.556932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.556978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.556996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.569518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.569565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.569582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.581600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.581632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.581669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.594235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.594266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.893 [2024-07-12 16:02:12.594283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.893 [2024-07-12 16:02:12.606705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.893 [2024-07-12 16:02:12.606737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.894 [2024-07-12 16:02:12.606755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.894 [2024-07-12 16:02:12.619341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:42.894 [2024-07-12 16:02:12.619378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.894 [2024-07-12 16:02:12.619395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.631087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.631118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:43.152 [2024-07-12 16:02:12.631134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.644172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.644203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.644219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.656867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.656898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.656915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.668539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.668567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.668583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.681680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.681725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.681742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.694616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.694669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.694688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.707505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.707536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.707554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.718183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.718214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23035 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.718230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.733049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.733079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.733094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.745746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.745778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.745796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.756866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.756895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.756911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.769612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.769642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.769660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.785540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.785572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.785590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.797359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.797389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.797405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.810076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.810108] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.810124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.820820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.820849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.820865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.834288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.834326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.834345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.846807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.846854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.846871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.859557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.859588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.859605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.152 [2024-07-12 16:02:12.873094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.152 [2024-07-12 16:02:12.873140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.152 [2024-07-12 16:02:12.873157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.884901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.884940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.884956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.897358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.897390] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.897408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.909233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.909264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.909302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.922454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.922483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.922499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.936323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.936350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.936366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.947904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.947936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.947953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.960955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.960985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.961002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.972652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.972683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.972700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.984562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.984592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.984609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:12.999675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:12.999703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:12.999719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:13.010690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:13.010719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:13.010736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:13.024726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:13.024759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:13.024776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:13.038889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:13.038921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:13.038938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:13.049422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.410 [2024-07-12 16:02:13.049451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.410 [2024-07-12 16:02:13.049467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.410 [2024-07-12 16:02:13.063469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.411 [2024-07-12 16:02:13.063498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.411 [2024-07-12 16:02:13.063514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.411 [2024-07-12 16:02:13.076500] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.411 [2024-07-12 16:02:13.076546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.411 [2024-07-12 16:02:13.076563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.411 [2024-07-12 16:02:13.087650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.411 [2024-07-12 16:02:13.087681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.411 [2024-07-12 16:02:13.087698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.411 [2024-07-12 16:02:13.100376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.411 [2024-07-12 16:02:13.100407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.411 [2024-07-12 16:02:13.100424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.411 [2024-07-12 16:02:13.114304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.411 [2024-07-12 16:02:13.114356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.411 [2024-07-12 16:02:13.114373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.411 [2024-07-12 16:02:13.126897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.411 [2024-07-12 16:02:13.126927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.411 [2024-07-12 16:02:13.126951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.411 [2024-07-12 16:02:13.138295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.411 [2024-07-12 16:02:13.138333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.411 [2024-07-12 16:02:13.138352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.151565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.151611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.151628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:43.669 [2024-07-12 16:02:13.164431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.164462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.164480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.175394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.175424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:5104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.175440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.187730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.187757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.187773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.201070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.201100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.201116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.216432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.216463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.216479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.227639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.227669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.227685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.243572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.243607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.243624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.256007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.256039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.256056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.267479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.267510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.267527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.280514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.280545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.280563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.291437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.291469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.291486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.305274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.305305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.305331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.316080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.316108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.316123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.329125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.329154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.329170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.342856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.342885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.342902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.355457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.355488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.355505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.366765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.366796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.366813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.382849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.382883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.669 [2024-07-12 16:02:13.396050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.669 [2024-07-12 16:02:13.396082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.669 [2024-07-12 16:02:13.396098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.927 [2024-07-12 16:02:13.407516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.927 [2024-07-12 16:02:13.407547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.927 [2024-07-12 16:02:13.407564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.927 [2024-07-12 16:02:13.421832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.927 [2024-07-12 16:02:13.421863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.927 [2024-07-12 16:02:13.421881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.927 [2024-07-12 16:02:13.433006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.927 [2024-07-12 16:02:13.433037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.927 [2024-07-12 16:02:13.433055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.927 [2024-07-12 16:02:13.447659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.927 [2024-07-12 16:02:13.447690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.927 [2024-07-12 16:02:13.447708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.927 [2024-07-12 16:02:13.458596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.927 [2024-07-12 16:02:13.458626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.927 [2024-07-12 16:02:13.458650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.927 [2024-07-12 16:02:13.472111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.927 [2024-07-12 16:02:13.472142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.927 [2024-07-12 16:02:13.472160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.927 [2024-07-12 16:02:13.484681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.484713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.484729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.497287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.497326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.497346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.509945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.509976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 
[2024-07-12 16:02:13.509993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.522010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.522041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.522058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.534460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.534491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.534508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.549426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.549457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.549475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.560339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.560391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.560407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.573927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.573964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.573982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.587743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.587775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.587793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.599363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.599394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12593 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.599411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.612401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.612429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.612445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.625132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.625163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.625180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.636971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.636998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.637013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.928 [2024-07-12 16:02:13.650710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:43.928 [2024-07-12 16:02:13.650740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.928 [2024-07-12 16:02:13.650757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.185 [2024-07-12 16:02:13.661661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc60f00) 00:25:44.185 [2024-07-12 16:02:13.661691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.186 [2024-07-12 16:02:13.661708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.186 00:25:44.186 Latency(us) 00:25:44.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.186 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:44.186 nvme0n1 : 2.01 20017.31 78.19 0.00 0.00 6385.90 3446.71 20583.16 00:25:44.186 =================================================================================================================== 00:25:44.186 Total : 20017.31 78.19 0.00 0.00 6385.90 3446.71 20583.16 00:25:44.186 0 00:25:44.186 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:44.186 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:44.186 
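get_transient_errcount (host/digest.sh) reads the per-bdev NVMe transient transport error counter that the injected CRC32C corruption drives up; the rpc.py call and jq filter traced on the next lines implement it. A minimal stand-alone sketch of the same query, assuming bdevperf is serving RPCs on /var/tmp/bperf.sock and the bdev is named nvme0n1 as in this run (error statistics are only populated because the test passes --nvme-error-stat to bdev_nvme_set_options, as traced further below for the next bperf instance):
  # Query per-bdev NVMe error statistics from the bdevperf RPC socket and
  # extract the transient transport error count (data digest failures land here)
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
digest.sh then asserts that the returned count is non-zero (the trace below shows "(( 157 > 0 ))") before killing the bperf process and moving on to the next workload.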
16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:44.186 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:44.186 | .driver_specific 00:25:44.186 | .nvme_error 00:25:44.186 | .status_code 00:25:44.186 | .command_transient_transport_error' 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 125823 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 125823 ']' 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 125823 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125823 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125823' 00:25:44.443 killing process with pid 125823 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 125823 00:25:44.443 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.443 00:25:44.443 Latency(us) 00:25:44.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.443 =================================================================================================================== 00:25:44.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.443 16:02:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 125823 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=126234 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 126234 /var/tmp/bperf.sock 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 126234 ']' 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:44.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:44.701 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:44.701 [2024-07-12 16:02:14.274667] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:44.701 [2024-07-12 16:02:14.274759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126234 ] 00:25:44.701 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:44.701 Zero copy mechanism will not be used. 00:25:44.701 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.701 [2024-07-12 16:02:14.332269] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.960 [2024-07-12 16:02:14.435573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.960 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:44.960 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:44.960 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:44.960 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:45.218 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:45.218 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.218 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:45.218 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.218 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.218 16:02:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:45.476 nvme0n1 00:25:45.476 16:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:45.476 16:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.476 16:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:45.476 16:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.476 16:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # 
bperf_py perform_tests 00:25:45.476 16:02:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:45.734 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.734 Zero copy mechanism will not be used. 00:25:45.734 Running I/O for 2 seconds... 00:25:45.734 [2024-07-12 16:02:15.317590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.317646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.317666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.327056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.327090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.327116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.336321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.336368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.336386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.345508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.345540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.345557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.354673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.354703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.354721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.363978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.364008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.364025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.373148] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.373178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.373195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.382376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.382407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.382424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.391569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.391600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.391617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.400870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.400900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.400917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.410049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.410085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.410103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.419194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.419224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.419242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.428357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.428387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.428404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.437497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.437543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.437561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.446957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.446987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.447004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.734 [2024-07-12 16:02:15.456845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.734 [2024-07-12 16:02:15.456877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.734 [2024-07-12 16:02:15.456895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.467246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.467278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.467295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.477851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.477884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.477901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.489694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.489738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.489755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.501534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.501565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.501583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.512911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.512942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.512959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.523575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.523607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.523624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.535159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.535190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.535208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.546866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.546897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.546914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.558363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.558394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.558411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.569586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.569618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.569635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.580000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.580031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.580048] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.590526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.590572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.590595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.602474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.602507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.602524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.612831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.612862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.612879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.623941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.992 [2024-07-12 16:02:15.623972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.992 [2024-07-12 16:02:15.623989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.992 [2024-07-12 16:02:15.636142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.636175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.636193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.993 [2024-07-12 16:02:15.646539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.646571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.646588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.993 [2024-07-12 16:02:15.658446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.658492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.658510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.993 [2024-07-12 16:02:15.668758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.668790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.668807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:45.993 [2024-07-12 16:02:15.680234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.680265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.680283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:45.993 [2024-07-12 16:02:15.690938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.690969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.690987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:45.993 [2024-07-12 16:02:15.700914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.700946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.700964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.993 [2024-07-12 16:02:15.711211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:45.993 [2024-07-12 16:02:15.711256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.993 [2024-07-12 16:02:15.711273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.721465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.721497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.721515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.730830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.730860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.730876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.740551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.740582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.740599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.749676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.749705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.749722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.758820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.758849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.758866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.768065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.768094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.768116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.777384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.777413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.777430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.786654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.786698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.786714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.796078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.796109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.796126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.805489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.805519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.805536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.814924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.814952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.814969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.824446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.824490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.824507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.833849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.833877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.833893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.844012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.844042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.844059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.853487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.853522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.853540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.862752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.862782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.862799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.872302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.872340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.872372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.881742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.251 [2024-07-12 16:02:15.881770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.251 [2024-07-12 16:02:15.881786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.251 [2024-07-12 16:02:15.890983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.891012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.891027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.900612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.900656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.900673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.910071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.910112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.910129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.919485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.919516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.919533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.928685] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.928716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.928733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.937931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.937959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.937975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.947124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.947154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.947172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.956310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.956362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.956379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.965544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.965574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.965590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.252 [2024-07-12 16:02:15.974863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.252 [2024-07-12 16:02:15.974891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.252 [2024-07-12 16:02:15.974907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:15.984170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:15.984200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:15.984216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:15.993565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:15.993609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:15.993626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.003298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.003350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.003368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.012907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.012937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.012959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.022633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.022662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.022679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.031954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.031982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.031998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.041127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.041172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.041189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.050969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.051014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.051031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.060142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.060172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.060189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.069438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.069468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.069485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.078747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.078776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.078792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.088056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.088100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.088117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.097752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.097782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.097799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.107385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.107429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.107446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:46.510 [2024-07-12 16:02:16.116736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0) 00:25:46.510 [2024-07-12 16:02:16.116779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:46.510 [2024-07-12 16:02:16.116795] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:46.510 [2024-07-12 16:02:16.125973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14036b0)
00:25:46.510 [2024-07-12 16:02:16.126017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:46.510 [2024-07-12 16:02:16.126033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... repetitive entries condensed: the same three log lines (data digest error on tqpair=(0x14036b0), the READ command print for qid:1 cid:15 len:32 with varying lba, and the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeat for every READ completion from [2024-07-12 16:02:16.135426] through [2024-07-12 16:02:17.316200], elapsed 00:25:46.510 to 00:25:47.806 ...]
00:25:47.806 
00:25:47.806 Latency(us)
00:25:47.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:47.806 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:47.806 nvme0n1 : 2.01 3278.04 409.76 0.00 0.00 4874.64 4417.61 12233.39 00:25:47.806 =================================================================================================================== 00:25:47.806 Total : 3278.04 409.76 0.00 0.00 4874.64 4417.61 12233.39 00:25:47.806 0 00:25:47.806 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:47.806 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:47.806 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:47.806 | .driver_specific 00:25:47.806 | .nvme_error 00:25:47.806 | .status_code 00:25:47.806 | .command_transient_transport_error' 00:25:47.806 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 )) 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 126234 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 126234 ']' 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 126234 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126234 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126234' 00:25:48.064 killing process with pid 126234 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 126234 00:25:48.064 Received shutdown signal, test time was about 2.000000 seconds 00:25:48.064 00:25:48.064 Latency(us) 00:25:48.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.064 =================================================================================================================== 00:25:48.064 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:48.064 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 126234 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=126638 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 126638 /var/tmp/bperf.sock 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 126638 ']' 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:48.321 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:48.321 [2024-07-12 16:02:17.926486] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:48.321 [2024-07-12 16:02:17.926576] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126638 ] 00:25:48.321 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.321 [2024-07-12 16:02:17.985828] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.579 [2024-07-12 16:02:18.091993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.579 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.579 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:48.579 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:48.579 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:48.836 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:48.836 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.836 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:48.836 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.836 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:48.836 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.402 nvme0n1 00:25:49.402 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:49.402 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.402 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:49.402 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.402 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:49.402 16:02:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.402 Running I/O for 2 seconds... 00:25:49.402 [2024-07-12 16:02:18.984216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ec408 00:25:49.402 [2024-07-12 16:02:18.985289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:18.985357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:18.996337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ed4e8 00:25:49.402 [2024-07-12 16:02:18.997453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:18.997485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.009757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ee5c8 00:25:49.402 [2024-07-12 16:02:19.011351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.011380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.021985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e6738 00:25:49.402 [2024-07-12 16:02:19.023762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.023791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.034216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190dece0 00:25:49.402 [2024-07-12 16:02:19.036161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.036190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.042704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ecc78 00:25:49.402 [2024-07-12 16:02:19.043601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:6071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.043629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.054614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e5658 00:25:49.402 [2024-07-12 16:02:19.055554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.055585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.066519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e3d08 00:25:49.402 [2024-07-12 16:02:19.067268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.067298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.078846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7da8 00:25:49.402 [2024-07-12 16:02:19.079931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.079959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.090885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f9b30 00:25:49.402 [2024-07-12 16:02:19.092048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.092076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.103152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e6fa8 00:25:49.402 [2024-07-12 16:02:19.104550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.104578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.115204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f4b08 00:25:49.402 [2024-07-12 16:02:19.116563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.116592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.402 [2024-07-12 16:02:19.127152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fa3a0 00:25:49.402 [2024-07-12 16:02:19.128614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.402 [2024-07-12 16:02:19.128643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.660 [2024-07-12 16:02:19.138481] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7538 00:25:49.660 [2024-07-12 16:02:19.139858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.660 [2024-07-12 16:02:19.139887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.660 [2024-07-12 16:02:19.149360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e49b0 00:25:49.660 [2024-07-12 16:02:19.150359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.660 [2024-07-12 16:02:19.150394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.660 [2024-07-12 16:02:19.161244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190efae0 00:25:49.661 [2024-07-12 16:02:19.162111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.162141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.173202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc560 00:25:49.661 [2024-07-12 16:02:19.174462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.174491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.185259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7100 00:25:49.661 [2024-07-12 16:02:19.186353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.186382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.196290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f0788 00:25:49.661 [2024-07-12 16:02:19.198354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.198384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.206603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190de038 00:25:49.661 [2024-07-12 
16:02:19.207391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.207420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.218778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ec408 00:25:49.661 [2024-07-12 16:02:19.219795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.219823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.231887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fb8b8 00:25:49.661 [2024-07-12 16:02:19.233113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.233142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.244203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f2d80 00:25:49.661 [2024-07-12 16:02:19.245458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.245488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.255354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eff18 00:25:49.661 [2024-07-12 16:02:19.256624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.256652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.266212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f4f40 00:25:49.661 [2024-07-12 16:02:19.267118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.267146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.278149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e4140 00:25:49.661 [2024-07-12 16:02:19.278975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.279003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.290219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e1b48 
00:25:49.661 [2024-07-12 16:02:19.291032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.291062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.303758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fd640 00:25:49.661 [2024-07-12 16:02:19.305237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.305265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.314486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ff3c8 00:25:49.661 [2024-07-12 16:02:19.315644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.315672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.327656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e99d8 00:25:49.661 [2024-07-12 16:02:19.329354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.329390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.338302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7970 00:25:49.661 [2024-07-12 16:02:19.339658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.339688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.349985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0630 00:25:49.661 [2024-07-12 16:02:19.351323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.351378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.362021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f35f0 00:25:49.661 [2024-07-12 16:02:19.363458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.363487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.373051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x102a640) with pdu=0x2000190e9e10 00:25:49.661 [2024-07-12 16:02:19.374441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.374470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.661 [2024-07-12 16:02:19.383888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ed4e8 00:25:49.661 [2024-07-12 16:02:19.384872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.661 [2024-07-12 16:02:19.384901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.395954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190de470 00:25:49.919 [2024-07-12 16:02:19.396839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.396869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.408150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ef6a8 00:25:49.919 [2024-07-12 16:02:19.409163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.409193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.418975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e5a90 00:25:49.919 [2024-07-12 16:02:19.421032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.421061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.430173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.431074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.431118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.442076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.442927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.442970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.454072] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.454900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.454949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.466131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.467031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.467074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.478118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.478962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.479006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.490019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.490869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.490913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.501886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.502721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.502749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.515105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0ea0 00:25:49.919 [2024-07-12 16:02:19.516582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.516610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:49.919 [2024-07-12 16:02:19.525966] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc128 00:25:49.919 [2024-07-12 16:02:19.527021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.919 [2024-07-12 16:02:19.527050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.539328] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ec408 00:25:49.920 [2024-07-12 16:02:19.540859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.540887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.551408] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e99d8 00:25:49.920 [2024-07-12 16:02:19.553064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.553093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.562172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e23b8 00:25:49.920 [2024-07-12 16:02:19.563582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.563612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.573927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e9168 00:25:49.920 [2024-07-12 16:02:19.575236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.575264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.585879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e38d0 00:25:49.920 [2024-07-12 16:02:19.587265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.587293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.595658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ec840 00:25:49.920 [2024-07-12 16:02:19.596389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.596419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.607935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f81e0 00:25:49.920 [2024-07-12 16:02:19.608719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.608748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 
16:02:19.619886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eee38 00:25:49.920 [2024-07-12 16:02:19.621091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.621119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.631815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e7c50 00:25:49.920 [2024-07-12 16:02:19.633038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.633067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:49.920 [2024-07-12 16:02:19.645187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fe720 00:25:49.920 [2024-07-12 16:02:19.647036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:49.920 [2024-07-12 16:02:19.647064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.657634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e1f80 00:25:50.178 [2024-07-12 16:02:19.659443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.659471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.665760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ef270 00:25:50.178 [2024-07-12 16:02:19.666521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.666549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.676856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e9168 00:25:50.178 [2024-07-12 16:02:19.677702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.677730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.689177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ddc00 00:25:50.178 [2024-07-12 16:02:19.690178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.690206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:25:50.178 [2024-07-12 16:02:19.701337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e4578 00:25:50.178 [2024-07-12 16:02:19.702457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.702485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.714457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e7c50 00:25:50.178 [2024-07-12 16:02:19.715825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.715855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.726522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f8e88 00:25:50.178 [2024-07-12 16:02:19.727971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.727999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.736166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f2510 00:25:50.178 [2024-07-12 16:02:19.737011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.737055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.747206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fa7d8 00:25:50.178 [2024-07-12 16:02:19.747984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.748011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.759363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e27f0 00:25:50.178 [2024-07-12 16:02:19.760254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.760288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.772470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7100 00:25:50.178 [2024-07-12 16:02:19.773691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.773719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.784283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e12d8 00:25:50.178 [2024-07-12 16:02:19.785462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.785491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.796446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e4de8 00:25:50.178 [2024-07-12 16:02:19.797690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.797718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.808352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eea00 00:25:50.178 [2024-07-12 16:02:19.809644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.809672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.820598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ff3c8 00:25:50.178 [2024-07-12 16:02:19.822013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.822042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.831674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e6fa8 00:25:50.178 [2024-07-12 16:02:19.832954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.832982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.842595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f2d80 00:25:50.178 [2024-07-12 16:02:19.843629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.843658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.855732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eaab8 00:25:50.178 [2024-07-12 16:02:19.857250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.857279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.866661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f0788 00:25:50.178 [2024-07-12 16:02:19.867895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.867939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.879647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e99d8 00:25:50.178 [2024-07-12 16:02:19.881274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.881322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.891761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e6738 00:25:50.178 [2024-07-12 16:02:19.893526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.893554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:50.178 [2024-07-12 16:02:19.899841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e7818 00:25:50.178 [2024-07-12 16:02:19.900620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.178 [2024-07-12 16:02:19.900650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.911677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ee5c8 00:25:50.436 [2024-07-12 16:02:19.912464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.912495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.923501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eb328 00:25:50.436 [2024-07-12 16:02:19.924149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.924178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.935565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f2510 00:25:50.436 [2024-07-12 16:02:19.936463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.936494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.949082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc998 00:25:50.436 [2024-07-12 16:02:19.950768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.950798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.959883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ed0b0 00:25:50.436 [2024-07-12 16:02:19.961266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.961295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.973159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e4de8 00:25:50.436 [2024-07-12 16:02:19.975049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.975078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.981449] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fd208 00:25:50.436 [2024-07-12 16:02:19.982230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.982259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:19.994896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fb480 00:25:50.436 [2024-07-12 16:02:19.996247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:19.996276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:20.008016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f8618 00:25:50.436 [2024-07-12 16:02:20.009624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.436 [2024-07-12 16:02:20.009662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:50.436 [2024-07-12 16:02:20.019076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e88f8 00:25:50.436 [2024-07-12 16:02:20.020259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.020289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.030855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fef90 00:25:50.437 [2024-07-12 16:02:20.031948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.031981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.042565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc560 00:25:50.437 [2024-07-12 16:02:20.043660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.043691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.056246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7538 00:25:50.437 [2024-07-12 16:02:20.058017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.058047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.068559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fac10 00:25:50.437 [2024-07-12 16:02:20.070368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.070406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.076971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ec408 00:25:50.437 [2024-07-12 16:02:20.077782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.077810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.089161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e99d8 00:25:50.437 [2024-07-12 16:02:20.090054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.090098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.101156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7100 00:25:50.437 [2024-07-12 16:02:20.101958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.101986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.113136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc560 00:25:50.437 [2024-07-12 16:02:20.114105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.114133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.125164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f2948 00:25:50.437 [2024-07-12 16:02:20.126208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.126237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.136067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e23b8 00:25:50.437 [2024-07-12 16:02:20.136949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.136977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.149135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eb328 00:25:50.437 [2024-07-12 16:02:20.150301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.150337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:50.437 [2024-07-12 16:02:20.160067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eee38 00:25:50.437 [2024-07-12 16:02:20.161118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.437 [2024-07-12 16:02:20.161147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.172188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e6fa8 00:25:50.695 [2024-07-12 16:02:20.173453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.173481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.184233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e9168 00:25:50.695 [2024-07-12 16:02:20.185616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 
16:02:20.185644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.196264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0630 00:25:50.695 [2024-07-12 16:02:20.197842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.197871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.207188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f20d8 00:25:50.695 [2024-07-12 16:02:20.208327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.208371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.218847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e5220 00:25:50.695 [2024-07-12 16:02:20.219955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.219998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.230668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eaef0 00:25:50.695 [2024-07-12 16:02:20.231831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.231875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.242698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f9f68 00:25:50.695 [2024-07-12 16:02:20.243636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.243665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.256403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e2c28 00:25:50.695 [2024-07-12 16:02:20.258217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.258245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.264723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f0788 00:25:50.695 [2024-07-12 16:02:20.265460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:50.695 [2024-07-12 16:02:20.265489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.275908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc560 00:25:50.695 [2024-07-12 16:02:20.276666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.276695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.288219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e1f80 00:25:50.695 [2024-07-12 16:02:20.289171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.289199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.300595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190edd58 00:25:50.695 [2024-07-12 16:02:20.301707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.301734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.313748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fda78 00:25:50.695 [2024-07-12 16:02:20.315076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.315105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.325961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f4298 00:25:50.695 [2024-07-12 16:02:20.327296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.327331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.336957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fb480 00:25:50.695 [2024-07-12 16:02:20.338283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.338311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.349123] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f8a50 00:25:50.695 [2024-07-12 16:02:20.350583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8426 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:50.695 [2024-07-12 16:02:20.350611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:50.695 [2024-07-12 16:02:20.361341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e49b0 00:25:50.695 [2024-07-12 16:02:20.363061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.696 [2024-07-12 16:02:20.363089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.696 [2024-07-12 16:02:20.372220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fe2e8 00:25:50.696 [2024-07-12 16:02:20.373514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.696 [2024-07-12 16:02:20.373547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:50.696 [2024-07-12 16:02:20.383927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f6890 00:25:50.696 [2024-07-12 16:02:20.385268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.696 [2024-07-12 16:02:20.385312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:50.696 [2024-07-12 16:02:20.395026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eb328 00:25:50.696 [2024-07-12 16:02:20.396221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.696 [2024-07-12 16:02:20.396248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:50.696 [2024-07-12 16:02:20.407099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f6cc8 00:25:50.696 [2024-07-12 16:02:20.408446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.696 [2024-07-12 16:02:20.408475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:50.696 [2024-07-12 16:02:20.419158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f6020 00:25:50.696 [2024-07-12 16:02:20.420796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.696 [2024-07-12 16:02:20.420825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.430302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e27f0 00:25:50.954 [2024-07-12 16:02:20.431458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 
lba:23444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.431487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.442049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190dece0 00:25:50.954 [2024-07-12 16:02:20.443233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.443261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.453933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fdeb0 00:25:50.954 [2024-07-12 16:02:20.455111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.455140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.464756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ee190 00:25:50.954 [2024-07-12 16:02:20.465868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.465896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.478034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f9f68 00:25:50.954 [2024-07-12 16:02:20.479311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.479366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.490094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fdeb0 00:25:50.954 [2024-07-12 16:02:20.491458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.491488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.501085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ef6a8 00:25:50.954 [2024-07-12 16:02:20.502426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.502455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.511884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f4298 00:25:50.954 [2024-07-12 16:02:20.512901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:97 nsid:1 lba:22934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.512929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.523754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc560 00:25:50.954 [2024-07-12 16:02:20.524615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.524644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.535908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f8618 00:25:50.954 [2024-07-12 16:02:20.537062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.537090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.548006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f20d8 00:25:50.954 [2024-07-12 16:02:20.549161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.954 [2024-07-12 16:02:20.549189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:50.954 [2024-07-12 16:02:20.559007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190feb58 00:25:50.954 [2024-07-12 16:02:20.560221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.560248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.571866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7100 00:25:50.955 [2024-07-12 16:02:20.573284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.573333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.583849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eb760 00:25:50.955 [2024-07-12 16:02:20.585368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.585396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.593328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f0788 00:25:50.955 [2024-07-12 16:02:20.594305] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.594358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.604320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190dfdc0 00:25:50.955 [2024-07-12 16:02:20.605162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.605189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.617332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e3498 00:25:50.955 [2024-07-12 16:02:20.618447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.618476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.628251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e23b8 00:25:50.955 [2024-07-12 16:02:20.629250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.629277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.641140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ea680 00:25:50.955 [2024-07-12 16:02:20.642469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.642498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.652991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ed4e8 00:25:50.955 [2024-07-12 16:02:20.654260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.654303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.664915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f8a50 00:25:50.955 [2024-07-12 16:02:20.666171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.666199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:50.955 [2024-07-12 16:02:20.677188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e1710 00:25:50.955 [2024-07-12 
16:02:20.678519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.955 [2024-07-12 16:02:20.678548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.688226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e2c28 00:25:51.213 [2024-07-12 16:02:20.689538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.689566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.699171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f1430 00:25:51.213 [2024-07-12 16:02:20.700094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.700122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.712196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e0a68 00:25:51.213 [2024-07-12 16:02:20.713696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.713740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.723213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f5be8 00:25:51.213 [2024-07-12 16:02:20.724265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.724309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.734897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f2948 00:25:51.213 [2024-07-12 16:02:20.736048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.736076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.746777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e5a90 00:25:51.213 [2024-07-12 16:02:20.747798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.747828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.758853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f3a28 
00:25:51.213 [2024-07-12 16:02:20.760106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.760148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.772201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e1710 00:25:51.213 [2024-07-12 16:02:20.773996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.774023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.783050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e7c50 00:25:51.213 [2024-07-12 16:02:20.784582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.784631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.793739] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e3498 00:25:51.213 [2024-07-12 16:02:20.795031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.795058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.804822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fb8b8 00:25:51.213 [2024-07-12 16:02:20.805789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.805817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.817035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fa3a0 00:25:51.213 [2024-07-12 16:02:20.818123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.818151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.828135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fef90 00:25:51.213 [2024-07-12 16:02:20.829193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.829221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.841238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) 
with pdu=0x2000190f7538 00:25:51.213 [2024-07-12 16:02:20.842532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.842561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.853296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190e3060 00:25:51.213 [2024-07-12 16:02:20.854625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.854653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.864244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f7da8 00:25:51.213 [2024-07-12 16:02:20.865592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.865619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.874990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ed0b0 00:25:51.213 [2024-07-12 16:02:20.875912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.875957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.885716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fc128 00:25:51.213 [2024-07-12 16:02:20.886548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.886576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.898723] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fac10 00:25:51.213 [2024-07-12 16:02:20.899851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.899879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.910900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f20d8 00:25:51.213 [2024-07-12 16:02:20.912132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.912160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.922901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x102a640) with pdu=0x2000190ef6a8 00:25:51.213 [2024-07-12 16:02:20.924164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.924192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:51.213 [2024-07-12 16:02:20.934664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190f0bc0 00:25:51.213 [2024-07-12 16:02:20.935972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.213 [2024-07-12 16:02:20.935999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:51.471 [2024-07-12 16:02:20.945757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190eb760 00:25:51.471 [2024-07-12 16:02:20.946919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.471 [2024-07-12 16:02:20.946954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:51.471 [2024-07-12 16:02:20.958784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190fb8b8 00:25:51.471 [2024-07-12 16:02:20.960193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.471 [2024-07-12 16:02:20.960221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:51.471 [2024-07-12 16:02:20.970927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x102a640) with pdu=0x2000190ed920 00:25:51.471 [2024-07-12 16:02:20.972463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:51.471 [2024-07-12 16:02:20.972493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:51.471 00:25:51.471 Latency(us) 00:25:51.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.471 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:51.471 nvme0n1 : 2.00 21703.24 84.78 0.00 0.00 5888.18 2439.40 14757.74 00:25:51.471 =================================================================================================================== 00:25:51.471 Total : 21703.24 84.78 0.00 0.00 5888.18 2439.40 14757.74 00:25:51.471 0 00:25:51.471 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:51.471 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:51.471 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:51.471 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:51.471 | .driver_specific 
00:25:51.471 | .nvme_error 00:25:51.471 | .status_code 00:25:51.471 | .command_transient_transport_error' 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 126638 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 126638 ']' 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 126638 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126638 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126638' 00:25:51.728 killing process with pid 126638 00:25:51.728 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 126638 00:25:51.728 Received shutdown signal, test time was about 2.000000 seconds 00:25:51.728 00:25:51.728 Latency(us) 00:25:51.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.729 =================================================================================================================== 00:25:51.729 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:51.729 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 126638 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=127165 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 127165 /var/tmp/bperf.sock 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 127165 ']' 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:52.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.044 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.044 [2024-07-12 16:02:21.570851] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:25:52.044 [2024-07-12 16:02:21.570944] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127165 ] 00:25:52.044 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.044 Zero copy mechanism will not be used. 00:25:52.044 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.044 [2024-07-12 16:02:21.629885] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.044 [2024-07-12 16:02:21.736883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.328 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.328 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:52.328 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.328 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.585 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:52.585 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.585 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.585 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.585 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.585 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.843 nvme0n1 00:25:52.843 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:52.843 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.843 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.843 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.843 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:52.843 16:02:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.843 I/O 
size of 131072 is greater than zero copy threshold (65536). 00:25:52.843 Zero copy mechanism will not be used. 00:25:52.843 Running I/O for 2 seconds... 00:25:52.843 [2024-07-12 16:02:22.562719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:52.843 [2024-07-12 16:02:22.563107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.843 [2024-07-12 16:02:22.563158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.575792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.576166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.576206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.589872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.590248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.590278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.604033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.604413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.604445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.618425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.618807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.618836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.632901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.633255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.633285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.647177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.647554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:53.101 [2024-07-12 16:02:22.647610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.661077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.661432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.661475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.674102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.674412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.674455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.686989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.687366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.687421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.101 [2024-07-12 16:02:22.699283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.101 [2024-07-12 16:02:22.699707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.101 [2024-07-12 16:02:22.699737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.713051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.713412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.713456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.726768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.727010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.727038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.739873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.740089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.740119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.753882] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.754234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.754264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.767236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.767610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.767662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.780213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.780500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.780543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.794115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.794396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.794427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.807433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.807806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.807851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.102 [2024-07-12 16:02:22.822341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.102 [2024-07-12 16:02:22.822739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.102 [2024-07-12 16:02:22.822768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.835734] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.836067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.836099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.849332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.849677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.849707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.862506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.862867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.862897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.876567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.876919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.876966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.890804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.891128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.891158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.903883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.904233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.904279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.917217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.917414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.917442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.930096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.930590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.930643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.942545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.943049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.359 [2024-07-12 16:02:22.943080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.359 [2024-07-12 16:02:22.954710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.359 [2024-07-12 16:02:22.955145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:22.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:22.966291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:22.966810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:22.966841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:22.979387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:22.979836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:22.979866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:22.991350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:22.991896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:22.991926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:23.004049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:23.004540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:23.004571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:23.017336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 
[2024-07-12 16:02:23.017782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:23.017812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:23.029433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:23.029944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:23.029973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:23.041974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:23.042363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:23.042393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:23.054636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:23.055134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:23.055165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:23.066906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:23.067436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:23.067467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.360 [2024-07-12 16:02:23.079664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.360 [2024-07-12 16:02:23.080081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.360 [2024-07-12 16:02:23.080127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.616 [2024-07-12 16:02:23.093035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.616 [2024-07-12 16:02:23.093529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.616 [2024-07-12 16:02:23.093576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.616 [2024-07-12 16:02:23.106101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) 
with pdu=0x2000190fef90 00:25:53.616 [2024-07-12 16:02:23.106604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.616 [2024-07-12 16:02:23.106634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.616 [2024-07-12 16:02:23.118752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.616 [2024-07-12 16:02:23.119130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.616 [2024-07-12 16:02:23.119162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.616 [2024-07-12 16:02:23.130847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.616 [2024-07-12 16:02:23.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.616 [2024-07-12 16:02:23.131430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.616 [2024-07-12 16:02:23.143351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.616 [2024-07-12 16:02:23.143903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.616 [2024-07-12 16:02:23.143933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.616 [2024-07-12 16:02:23.154937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.616 [2024-07-12 16:02:23.155456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.616 [2024-07-12 16:02:23.155487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.616 [2024-07-12 16:02:23.168461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.168883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.168913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.180677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.181135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.181165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.193199] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.193590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.193622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.206554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.207058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.207089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.219620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.220077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.220107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.232466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.232943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.232973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.245185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.245734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.245765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.257600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.257985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.258022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.269774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.270158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.270190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.282305] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.282674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.282720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.294594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.294968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.294999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.306246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.306664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.306695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.318448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.318879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.318910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.331541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.332003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.332032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.617 [2024-07-12 16:02:23.344396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.617 [2024-07-12 16:02:23.344879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.617 [2024-07-12 16:02:23.344909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.873 [2024-07-12 16:02:23.356219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.873 [2024-07-12 16:02:23.356653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-07-12 16:02:23.356683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:25:53.873 [2024-07-12 16:02:23.369071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.873 [2024-07-12 16:02:23.369485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-07-12 16:02:23.369516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.873 [2024-07-12 16:02:23.381388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.873 [2024-07-12 16:02:23.381815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.873 [2024-07-12 16:02:23.381845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.394248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.394746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.394776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.407705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.408132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.408162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.419122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.419534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.419564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.431279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.431671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.431701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.443576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.444051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.444082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.456917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.457382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.457425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.469942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.470433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.470479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.482946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.483449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.483479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.495544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.495973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.496002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.507806] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.508254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.508285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.520680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.521107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.521137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.533242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.533727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.533757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.546368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.546735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.546765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.558462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.558898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.558929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.570748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.571132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.571176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.582847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.583170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.583221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.874 [2024-07-12 16:02:23.594954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:53.874 [2024-07-12 16:02:23.595481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.874 [2024-07-12 16:02:23.595513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.608139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.608657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.608687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.620564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.620985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.621015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.632749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.633243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.633273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.646788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.647228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.647258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.659054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.659478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.659507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.671432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.671906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.671936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.684148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.684576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.684606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.696963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.132 [2024-07-12 16:02:23.697471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.132 [2024-07-12 16:02:23.697500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.132 [2024-07-12 16:02:23.709452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.709918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 
16:02:23.709947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.721984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.722474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.722505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.734910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.735358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.735389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.747521] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.747897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.747926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.759762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.760161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.760206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.772628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.773068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.773098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.783829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.784273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.784303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.796853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.797456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.797516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.811376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.811796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.811829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.821973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.822469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.822504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.834427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.834861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.834892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.846162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.846556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.846605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.133 [2024-07-12 16:02:23.857692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.133 [2024-07-12 16:02:23.858050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.133 [2024-07-12 16:02:23.858098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.391 [2024-07-12 16:02:23.869772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.391 [2024-07-12 16:02:23.870224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.391 [2024-07-12 16:02:23.870258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.391 [2024-07-12 16:02:23.881783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.391 [2024-07-12 16:02:23.882250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.391 [2024-07-12 16:02:23.882282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.391 [2024-07-12 16:02:23.893775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.894296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.894347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.906726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.907195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.907234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.919538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.920069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.920111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.931749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.932209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.932240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.944656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.945052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.945098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.956814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.957271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.957302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.969915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.970331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.970383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.982845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.983248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.983279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:23.994729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:23.995233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:23.995264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.007520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.007895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.007927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.020180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.020656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.020704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.032817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.033335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.033367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.045050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.045494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.045526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.057791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.058137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.058169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.071214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.071709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.071739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.083649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.084065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.084124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.095729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.096115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.096147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.107757] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.108332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.108364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.392 [2024-07-12 16:02:24.119430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.392 [2024-07-12 16:02:24.119906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.392 [2024-07-12 16:02:24.119949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.132254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.132734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.132769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.143815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 
[2024-07-12 16:02:24.144248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.144279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.155285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.155668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.155701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.166420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.166749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.166788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.178929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.179309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.179368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.190828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.191268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.191321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.202970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.203332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.203387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.214702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.215100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.215133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.227358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) 
with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.227870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.227901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.239261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.239665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.239695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.251621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.251941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.251975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.263381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.263831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.263878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.275787] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.276178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.276225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.287982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.288427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.288461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.300573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.300964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.300996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.312832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.313325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.313375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.325923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.650 [2024-07-12 16:02:24.326392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.650 [2024-07-12 16:02:24.326438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.650 [2024-07-12 16:02:24.338447] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.651 [2024-07-12 16:02:24.338844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.651 [2024-07-12 16:02:24.338887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.651 [2024-07-12 16:02:24.350726] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.651 [2024-07-12 16:02:24.351091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.651 [2024-07-12 16:02:24.351129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.651 [2024-07-12 16:02:24.363699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.651 [2024-07-12 16:02:24.364153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.651 [2024-07-12 16:02:24.364187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.651 [2024-07-12 16:02:24.376861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.651 [2024-07-12 16:02:24.377343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.651 [2024-07-12 16:02:24.377378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.389091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.389474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.389518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.401856] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.402356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.402391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.414514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.414948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.414979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.426890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.427257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.427304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.439536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.439963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.440031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.451402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.451715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.451760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.463080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.463523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.463555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.475948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.476313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.476358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:54.909 [2024-07-12 16:02:24.488531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.488991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.489040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.501075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.501457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.501504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.513627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.514085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.514117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.526065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.526450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.526512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.539142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.909 [2024-07-12 16:02:24.539581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.909 [2024-07-12 16:02:24.539615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:54.909 [2024-07-12 16:02:24.551285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xe5fc90) with pdu=0x2000190fef90 00:25:54.910 [2024-07-12 16:02:24.551735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.910 [2024-07-12 16:02:24.551790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:54.910 00:25:54.910 Latency(us) 00:25:54.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.910 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:54.910 nvme0n1 : 2.01 2444.98 305.62 0.00 0.00 6526.82 4660.34 15728.64 00:25:54.910 =================================================================================================================== 00:25:54.910 Total : 2444.98 305.62 0.00 0.00 6526.82 
4660.34 15728.64 00:25:54.910 0 00:25:54.910 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:54.910 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:54.910 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:54.910 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:54.910 | .driver_specific 00:25:54.910 | .nvme_error 00:25:54.910 | .status_code 00:25:54.910 | .command_transient_transport_error' 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 )) 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 127165 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 127165 ']' 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 127165 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127165 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127165' 00:25:55.168 killing process with pid 127165 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 127165 00:25:55.168 Received shutdown signal, test time was about 2.000000 seconds 00:25:55.168 00:25:55.168 Latency(us) 00:25:55.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:55.168 =================================================================================================================== 00:25:55.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:55.168 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 127165 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 125686 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 125686 ']' 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 125686 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125686 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 125686' 00:25:55.426 killing process with pid 125686 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 125686 00:25:55.426 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 125686 00:25:55.683 00:25:55.683 real 0m15.435s 00:25:55.683 user 0m30.559s 00:25:55.683 sys 0m4.079s 00:25:55.683 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:55.683 16:02:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:55.683 ************************************ 00:25:55.683 END TEST nvmf_digest_error 00:25:55.683 ************************************ 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:55.943 rmmod nvme_tcp 00:25:55.943 rmmod nvme_fabrics 00:25:55.943 rmmod nvme_keyring 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 125686 ']' 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 125686 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 125686 ']' 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 125686 00:25:55.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (125686) - No such process 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 125686 is not found' 00:25:55.943 Process with pid 125686 is not found 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.943 16:02:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.844 16:02:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:57.844 00:25:57.844 real 0m35.256s 00:25:57.844 user 1m2.266s 
00:25:57.844 sys 0m9.584s 00:25:57.844 16:02:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:57.844 16:02:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:57.844 ************************************ 00:25:57.844 END TEST nvmf_digest 00:25:57.844 ************************************ 00:25:57.844 16:02:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:57.844 16:02:27 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:57.844 16:02:27 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:57.844 16:02:27 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:57.844 16:02:27 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:57.844 16:02:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:58.102 16:02:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:58.102 16:02:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:58.102 ************************************ 00:25:58.102 START TEST nvmf_bdevperf 00:25:58.102 ************************************ 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:58.102 * Looking for test storage... 00:25:58.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.102 16:02:27 
nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:58.102 16:02:27 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:58.102 16:02:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.626 16:02:29 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:00.626 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:00.626 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:00.626 Found net devices under 0000:09:00.0: cvl_0_0 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:00.626 Found net devices under 0000:09:00.1: cvl_0_1 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:00.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:26:00.626 00:26:00.626 --- 10.0.0.2 ping statistics --- 00:26:00.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.626 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:00.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:26:00.626 00:26:00.626 --- 10.0.0.1 ping statistics --- 00:26:00.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.626 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=129521 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 129521 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 129521 ']' 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
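For reference, the nvmf_tcp_init plumbing traced above amounts to the command sequence below. This is only a condensed restatement of lines already present in the trace, not additional setup; the cvl_0_0/cvl_0_1 names are the net devices discovered under 0000:09:00.0 and 0000:09:00.1 earlier in this run and will differ on other hosts.

  ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (NVMF_INITIATOR_IP)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (NVMF_FIRST_TARGET_IP)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP reach the listener port
  ping -c 1 10.0.0.2                                                  # reachability check, as in the trace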
00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:00.626 16:02:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.626 [2024-07-12 16:02:30.003971] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:26:00.627 [2024-07-12 16:02:30.004060] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.627 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.627 [2024-07-12 16:02:30.089787] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:00.627 [2024-07-12 16:02:30.204565] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.627 [2024-07-12 16:02:30.204627] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.627 [2024-07-12 16:02:30.204642] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.627 [2024-07-12 16:02:30.204653] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.627 [2024-07-12 16:02:30.204663] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.627 [2024-07-12 16:02:30.204754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.627 [2024-07-12 16:02:30.204820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.627 [2024-07-12 16:02:30.204823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.627 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.884 [2024-07-12 16:02:30.356258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.884 Malloc0 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.884 [2024-07-12 16:02:30.423396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.884 { 00:26:00.884 "params": { 00:26:00.884 "name": "Nvme$subsystem", 00:26:00.884 "trtype": "$TEST_TRANSPORT", 00:26:00.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.884 "adrfam": "ipv4", 00:26:00.884 "trsvcid": "$NVMF_PORT", 00:26:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.884 "hdgst": ${hdgst:-false}, 00:26:00.884 "ddgst": ${ddgst:-false} 00:26:00.884 }, 00:26:00.884 "method": "bdev_nvme_attach_controller" 00:26:00.884 } 00:26:00.884 EOF 00:26:00.884 )") 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:00.884 16:02:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:00.884 "params": { 00:26:00.884 "name": "Nvme1", 00:26:00.884 "trtype": "tcp", 00:26:00.884 "traddr": "10.0.0.2", 00:26:00.884 "adrfam": "ipv4", 00:26:00.884 "trsvcid": "4420", 00:26:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:00.884 "hdgst": false, 00:26:00.884 "ddgst": false 00:26:00.884 }, 00:26:00.884 "method": "bdev_nvme_attach_controller" 00:26:00.884 }' 00:26:00.884 [2024-07-12 16:02:30.473233] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
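Before the bdevperf output below, note that the target side was configured through the five rpc_cmd calls in host/bdevperf.sh@17-21 above. As a condensed sketch, the same sequence issued by hand with rpc.py (against the default /var/tmp/spdk.sock that waitforlisten polls above; the calls and arguments are exactly those shown in the trace) would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                      # TCP transport with the test's options
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                         # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # allow any host, serial SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as a namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The result is the NVMe/TCP listener on 10.0.0.2:4420 confirmed by the nvmf_tcp_listen notice above, which is what the generated bdev_nvme_attach_controller JSON points bdevperf at.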
00:26:00.884 [2024-07-12 16:02:30.473330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129552 ] 00:26:00.884 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.884 [2024-07-12 16:02:30.535368] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.141 [2024-07-12 16:02:30.648013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.398 Running I/O for 1 seconds... 00:26:02.329 00:26:02.329 Latency(us) 00:26:02.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.329 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:02.329 Verification LBA range: start 0x0 length 0x4000 00:26:02.329 Nvme1n1 : 1.04 7684.00 30.02 0.00 0.00 15961.81 2852.03 44661.57 00:26:02.329 =================================================================================================================== 00:26:02.329 Total : 7684.00 30.02 0.00 0.00 15961.81 2852.03 44661.57 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=129814 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:02.586 { 00:26:02.586 "params": { 00:26:02.586 "name": "Nvme$subsystem", 00:26:02.586 "trtype": "$TEST_TRANSPORT", 00:26:02.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:02.586 "adrfam": "ipv4", 00:26:02.586 "trsvcid": "$NVMF_PORT", 00:26:02.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:02.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:02.586 "hdgst": ${hdgst:-false}, 00:26:02.586 "ddgst": ${ddgst:-false} 00:26:02.586 }, 00:26:02.586 "method": "bdev_nvme_attach_controller" 00:26:02.586 } 00:26:02.586 EOF 00:26:02.586 )") 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:02.586 16:02:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:02.586 "params": { 00:26:02.586 "name": "Nvme1", 00:26:02.586 "trtype": "tcp", 00:26:02.586 "traddr": "10.0.0.2", 00:26:02.586 "adrfam": "ipv4", 00:26:02.586 "trsvcid": "4420", 00:26:02.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:02.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:02.586 "hdgst": false, 00:26:02.586 "ddgst": false 00:26:02.586 }, 00:26:02.586 "method": "bdev_nvme_attach_controller" 00:26:02.586 }' 00:26:02.586 [2024-07-12 16:02:32.298997] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:26:02.586 [2024-07-12 16:02:32.299075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129814 ] 00:26:02.844 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.844 [2024-07-12 16:02:32.358818] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.844 [2024-07-12 16:02:32.470550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.100 Running I/O for 15 seconds... 00:26:05.627 16:02:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 129521 00:26:05.627 16:02:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:05.627 [2024-07-12 16:02:35.269544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.269976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.269992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.270006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.270022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.270036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.270051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.270064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.270080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.270094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.627 [2024-07-12 16:02:35.270109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.627 [2024-07-12 16:02:35.270121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:05.628 [2024-07-12 16:02:35.270206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270507] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.270981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.270994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.628 [2024-07-12 16:02:35.271345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.628 [2024-07-12 16:02:35.271360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40064 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:05.629 [2024-07-12 16:02:35.271678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.271973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.271987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.629 [2024-07-12 16:02:35.272587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.629 [2024-07-12 16:02:35.272601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.272980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.272992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.273017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.273043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.273068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.630 [2024-07-12 16:02:35.273096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.630 [2024-07-12 16:02:35.273122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 
[2024-07-12 16:02:35.273135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.630 [2024-07-12 16:02:35.273147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.630 [2024-07-12 16:02:35.273172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.630 [2024-07-12 16:02:35.273198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.630 [2024-07-12 16:02:35.273223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:05.630 [2024-07-12 16:02:35.273247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.273272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.273312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.630 [2024-07-12 16:02:35.273365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.630 [2024-07-12 16:02:35.273381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2178c80 is same with the state(5) to be set 00:26:05.630 [2024-07-12 16:02:35.273398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:05.630 [2024-07-12 16:02:35.273410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:05.630 [2024-07-12 16:02:35.273421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40560 len:8 PRP1 0x0 PRP2 0x0 00:26:05.630 [2024-07-12 16:02:35.273433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
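
The long run of records above pairs each queued READ/WRITE command with an "ABORTED - SQ DELETION (00/08)" completion, which is what the driver prints when a submission queue is torn down mid-I/O during the controller reset. A minimal triage sketch follows (not part of the test itself); it assumes this console output has been saved locally under a hypothetical name such as console.log and simply tallies the aborted commands per opcode from the strings visible in the dump.

```python
# Triage sketch: count the queued commands printed in the abort dump above.
# The file name "console.log" is an assumption; point it at a saved copy
# of this console output.
import re
from collections import Counter

# Matches records like:
#   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39824 len:8
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def summarize(path="console.log"):
    ops = Counter()
    lbas = []
    with open(path, errors="replace") as f:
        for line in f:
            for op, _sqid, _cid, _nsid, lba, _length in CMD_RE.findall(line):
                ops[op] += 1
                lbas.append(int(lba))
    if lbas:
        print(f"aborted commands by opcode: {dict(ops)}; "
              f"LBA range {min(lbas)}-{max(lbas)}")
    else:
        print("no aborted command records found")

if __name__ == "__main__":
    summarize()
```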
00:26:05.630 [2024-07-12 16:02:35.273494] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2178c80 was disconnected and freed. reset controller. 00:26:05.630 [2024-07-12 16:02:35.276886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.630 [2024-07-12 16:02:35.276962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.630 [2024-07-12 16:02:35.277665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.630 [2024-07-12 16:02:35.277694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.630 [2024-07-12 16:02:35.277710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.630 [2024-07-12 16:02:35.277957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.630 [2024-07-12 16:02:35.278150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.630 [2024-07-12 16:02:35.278168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.630 [2024-07-12 16:02:35.278182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.630 [2024-07-12 16:02:35.281126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.630 [2024-07-12 16:02:35.290375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.630 [2024-07-12 16:02:35.290869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.630 [2024-07-12 16:02:35.290911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.630 [2024-07-12 16:02:35.290927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.630 [2024-07-12 16:02:35.291177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.630 [2024-07-12 16:02:35.291401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.630 [2024-07-12 16:02:35.291422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.630 [2024-07-12 16:02:35.291435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.630 [2024-07-12 16:02:35.294396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.630 [2024-07-12 16:02:35.303665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.630 [2024-07-12 16:02:35.304122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.630 [2024-07-12 16:02:35.304164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.630 [2024-07-12 16:02:35.304180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.630 [2024-07-12 16:02:35.304417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.630 [2024-07-12 16:02:35.304637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.630 [2024-07-12 16:02:35.304670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.630 [2024-07-12 16:02:35.304682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.630 [2024-07-12 16:02:35.307611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.630 [2024-07-12 16:02:35.316763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.630 [2024-07-12 16:02:35.317178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.630 [2024-07-12 16:02:35.317219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.630 [2024-07-12 16:02:35.317240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.630 [2024-07-12 16:02:35.317495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.631 [2024-07-12 16:02:35.317729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.631 [2024-07-12 16:02:35.317747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.631 [2024-07-12 16:02:35.317759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.631 [2024-07-12 16:02:35.320658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.631 [2024-07-12 16:02:35.329969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.631 [2024-07-12 16:02:35.330467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.631 [2024-07-12 16:02:35.330494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.631 [2024-07-12 16:02:35.330523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.631 [2024-07-12 16:02:35.330756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.631 [2024-07-12 16:02:35.330964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.631 [2024-07-12 16:02:35.330982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.631 [2024-07-12 16:02:35.330994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.631 [2024-07-12 16:02:35.333903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.631 [2024-07-12 16:02:35.343074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.631 [2024-07-12 16:02:35.343549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.631 [2024-07-12 16:02:35.343578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.631 [2024-07-12 16:02:35.343594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.631 [2024-07-12 16:02:35.343845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.631 [2024-07-12 16:02:35.344038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.631 [2024-07-12 16:02:35.344056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.631 [2024-07-12 16:02:35.344068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.631 [2024-07-12 16:02:35.346990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.890 [2024-07-12 16:02:35.356789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.890 [2024-07-12 16:02:35.357356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.890 [2024-07-12 16:02:35.357387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.890 [2024-07-12 16:02:35.357404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.357628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.357838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.357861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.357874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.360820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.891 [2024-07-12 16:02:35.370046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.370464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.370492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.370509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.370742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.370949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.370968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.370980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.373893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.891 [2024-07-12 16:02:35.383166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.383633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.383660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.383676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.383913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.384105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.384123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.384135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.387026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.891 [2024-07-12 16:02:35.396255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.396718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.396758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.396775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.397011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.397204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.397222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.397235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.400120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.891 [2024-07-12 16:02:35.409617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.410024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.410052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.410068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.410323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.410537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.410557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.410569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.413473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.891 [2024-07-12 16:02:35.422869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.423290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.423339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.423358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.423609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.423817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.423835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.423847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.426828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.891 [2024-07-12 16:02:35.436029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.436448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.436475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.436506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.436758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.436951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.436969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.436981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.439983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.891 [2024-07-12 16:02:35.449275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.449694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.449720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.449739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.449968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.450162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.450180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.450192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.453117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.891 [2024-07-12 16:02:35.462429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.462916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.462957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.462973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.463225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.463445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.463465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.463477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.466364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.891 [2024-07-12 16:02:35.475651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.476169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.476217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.476232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.476484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.476696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.476714] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.476726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.479669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.891 [2024-07-12 16:02:35.488764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.489224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.891 [2024-07-12 16:02:35.489250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.891 [2024-07-12 16:02:35.489280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.891 [2024-07-12 16:02:35.489537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.891 [2024-07-12 16:02:35.489747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.891 [2024-07-12 16:02:35.489770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.891 [2024-07-12 16:02:35.489783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.891 [2024-07-12 16:02:35.492680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.891 [2024-07-12 16:02:35.501757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.891 [2024-07-12 16:02:35.502174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.502200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.502230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.502474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.502686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.502704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.502716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.505607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.892 [2024-07-12 16:02:35.514907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.515327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.515354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.515384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.515619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.515811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.515829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.515841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.518888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.892 [2024-07-12 16:02:35.528084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.528514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.528543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.528558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.528810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.529008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.529027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.529039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.532129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.892 [2024-07-12 16:02:35.541596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.542090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.542120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.542135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.542397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.542627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.542647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.542659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.545807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.892 [2024-07-12 16:02:35.554722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.555308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.555371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.555386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.555627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.555819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.555838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.555850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.558710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.892 [2024-07-12 16:02:35.567801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.568380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.568407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.568422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.568665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.568857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.568875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.568887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.571808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.892 [2024-07-12 16:02:35.580960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.581330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.581370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.581386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.581638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.581847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.581865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.581877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.584786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:05.892 [2024-07-12 16:02:35.594022] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.594451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.594477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.594492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.594708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.594917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.594935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.594947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.597853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:05.892 [2024-07-12 16:02:35.607214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:05.892 [2024-07-12 16:02:35.607656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.892 [2024-07-12 16:02:35.607683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:05.892 [2024-07-12 16:02:35.607698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:05.892 [2024-07-12 16:02:35.607911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:05.892 [2024-07-12 16:02:35.608119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:05.892 [2024-07-12 16:02:35.608137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:05.892 [2024-07-12 16:02:35.608150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:05.892 [2024-07-12 16:02:35.611088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.151 [2024-07-12 16:02:35.620809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.151 [2024-07-12 16:02:35.621375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.151 [2024-07-12 16:02:35.621405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.151 [2024-07-12 16:02:35.621421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.151 [2024-07-12 16:02:35.621652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.151 [2024-07-12 16:02:35.621860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.151 [2024-07-12 16:02:35.621878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.151 [2024-07-12 16:02:35.621895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.151 [2024-07-12 16:02:35.625131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.151 [2024-07-12 16:02:35.633826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.151 [2024-07-12 16:02:35.634361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.151 [2024-07-12 16:02:35.634404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.151 [2024-07-12 16:02:35.634422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.151 [2024-07-12 16:02:35.634666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.151 [2024-07-12 16:02:35.634859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.151 [2024-07-12 16:02:35.634877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.151 [2024-07-12 16:02:35.634889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.151 [2024-07-12 16:02:35.637809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.151 [2024-07-12 16:02:35.646925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.151 [2024-07-12 16:02:35.647264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.151 [2024-07-12 16:02:35.647290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.151 [2024-07-12 16:02:35.647304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.151 [2024-07-12 16:02:35.647543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.151 [2024-07-12 16:02:35.647754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.151 [2024-07-12 16:02:35.647773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.151 [2024-07-12 16:02:35.647784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.151 [2024-07-12 16:02:35.650697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.151 [2024-07-12 16:02:35.659981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.151 [2024-07-12 16:02:35.660510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.151 [2024-07-12 16:02:35.660553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.151 [2024-07-12 16:02:35.660569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.151 [2024-07-12 16:02:35.660815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.151 [2024-07-12 16:02:35.661008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.151 [2024-07-12 16:02:35.661027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.151 [2024-07-12 16:02:35.661039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.151 [2024-07-12 16:02:35.663930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.151 [2024-07-12 16:02:35.673143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.151 [2024-07-12 16:02:35.673569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.151 [2024-07-12 16:02:35.673601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.151 [2024-07-12 16:02:35.673617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.151 [2024-07-12 16:02:35.673859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.674051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.674070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.674081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.676984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.152 [2024-07-12 16:02:35.686354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.686827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.686867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.686883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.687113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.687331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.687350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.687377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.690338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.152 [2024-07-12 16:02:35.699405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.699825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.699852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.699883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.700133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.700352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.700372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.700384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.703258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.152 [2024-07-12 16:02:35.712461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.712932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.712973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.712989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.713239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.713466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.713487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.713499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.716391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.152 [2024-07-12 16:02:35.725646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.726075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.726116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.726131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.726375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.726588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.726607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.726633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.729550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.152 [2024-07-12 16:02:35.738671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.739101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.739141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.739155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.739400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.739614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.739633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.739660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.742537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.152 [2024-07-12 16:02:35.751866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.752447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.752516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.752532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.752772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.752964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.752982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.752994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.755788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.152 [2024-07-12 16:02:35.764988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.765434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.765462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.765477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.765721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.765913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.765931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.765943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.768886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.152 [2024-07-12 16:02:35.778083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.778688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.778748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.778781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.778993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.779188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.779207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.779219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.782261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.152 [2024-07-12 16:02:35.791169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.791583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.791610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.791626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.791840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.792048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.792066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.792078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.795017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.152 [2024-07-12 16:02:35.804374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.804770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.804812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.804832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.805063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.152 [2024-07-12 16:02:35.805271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.152 [2024-07-12 16:02:35.805290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.152 [2024-07-12 16:02:35.805325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.152 [2024-07-12 16:02:35.808169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.152 [2024-07-12 16:02:35.817539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.152 [2024-07-12 16:02:35.818007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.152 [2024-07-12 16:02:35.818034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.152 [2024-07-12 16:02:35.818050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.152 [2024-07-12 16:02:35.818299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.153 [2024-07-12 16:02:35.818524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.153 [2024-07-12 16:02:35.818545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.153 [2024-07-12 16:02:35.818558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.153 [2024-07-12 16:02:35.821454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.153 [2024-07-12 16:02:35.830586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.153 [2024-07-12 16:02:35.831198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.153 [2024-07-12 16:02:35.831236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.153 [2024-07-12 16:02:35.831268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.153 [2024-07-12 16:02:35.831525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.153 [2024-07-12 16:02:35.831726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.153 [2024-07-12 16:02:35.831760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.153 [2024-07-12 16:02:35.831772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.153 [2024-07-12 16:02:35.834674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.153 [2024-07-12 16:02:35.843587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.153 [2024-07-12 16:02:35.843998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.153 [2024-07-12 16:02:35.844026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.153 [2024-07-12 16:02:35.844041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.153 [2024-07-12 16:02:35.844289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.153 [2024-07-12 16:02:35.844516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.153 [2024-07-12 16:02:35.844542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.153 [2024-07-12 16:02:35.844556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.153 [2024-07-12 16:02:35.847450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.153 [2024-07-12 16:02:35.856700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.153 [2024-07-12 16:02:35.857116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.153 [2024-07-12 16:02:35.857142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.153 [2024-07-12 16:02:35.857173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.153 [2024-07-12 16:02:35.857413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.153 [2024-07-12 16:02:35.857612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.153 [2024-07-12 16:02:35.857631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.153 [2024-07-12 16:02:35.857643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.153 [2024-07-12 16:02:35.860595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.153 [2024-07-12 16:02:35.869864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.153 [2024-07-12 16:02:35.870285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.153 [2024-07-12 16:02:35.870312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.153 [2024-07-12 16:02:35.870355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.153 [2024-07-12 16:02:35.870605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.153 [2024-07-12 16:02:35.870813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.153 [2024-07-12 16:02:35.870832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.153 [2024-07-12 16:02:35.870844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.153 [2024-07-12 16:02:35.873766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.411 [2024-07-12 16:02:35.883095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.411 [2024-07-12 16:02:35.883547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.411 [2024-07-12 16:02:35.883577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.411 [2024-07-12 16:02:35.883594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.411 [2024-07-12 16:02:35.883855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.411 [2024-07-12 16:02:35.884087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.411 [2024-07-12 16:02:35.884106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.411 [2024-07-12 16:02:35.884119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.411 [2024-07-12 16:02:35.887161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.411 [2024-07-12 16:02:35.896230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.411 [2024-07-12 16:02:35.896710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.411 [2024-07-12 16:02:35.896752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.411 [2024-07-12 16:02:35.896769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.411 [2024-07-12 16:02:35.897016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.411 [2024-07-12 16:02:35.897209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.411 [2024-07-12 16:02:35.897228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.411 [2024-07-12 16:02:35.897240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.411 [2024-07-12 16:02:35.900162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.411 [2024-07-12 16:02:35.909344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.411 [2024-07-12 16:02:35.909716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.411 [2024-07-12 16:02:35.909758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:35.909773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:35.910021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:35.910214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:35.910232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:35.910244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:35.913126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.412 [2024-07-12 16:02:35.922434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:35.922811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:35.922853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:35.922868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:35.923113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:35.923305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:35.923348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:35.923361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:35.926277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.412 [2024-07-12 16:02:35.935461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:35.935878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:35.935905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:35.935934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:35.936187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:35.936407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:35.936427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:35.936440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:35.939310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.412 [2024-07-12 16:02:35.948590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:35.949024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:35.949066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:35.949081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:35.949338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:35.949547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:35.949565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:35.949577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:35.952393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.412 [2024-07-12 16:02:35.961757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:35.962175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:35.962202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:35.962232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:35.962484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:35.962715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:35.962733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:35.962746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:35.965641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.412 [2024-07-12 16:02:35.974750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:35.975296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:35.975356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:35.975375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:35.975610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:35.975820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:35.975838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:35.975856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:35.978782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.412 [2024-07-12 16:02:35.987859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:35.988329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:35.988373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:35.988391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:35.988642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:35.988835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:35.988853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:35.988865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:35.991804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.412 [2024-07-12 16:02:36.000920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:36.001394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:36.001431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:36.001448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:36.001700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:36.001894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:36.001912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:36.001924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:36.004852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.412 [2024-07-12 16:02:36.014091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:36.014571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:36.014599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:36.014615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:36.014866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:36.015059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:36.015078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:36.015090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:36.018024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.412 [2024-07-12 16:02:36.027249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:36.027704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:36.027746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:36.027763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:36.028011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:36.028204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:36.028222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:36.028234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:36.031157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.412 [2024-07-12 16:02:36.040630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:36.041082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:36.041111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:36.041126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:36.041407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:36.041639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:36.041674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:36.041687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:36.044752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.412 [2024-07-12 16:02:36.053893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:36.054390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:36.054419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:36.054435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:36.054697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:36.054889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:36.054908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:36.054920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:36.057742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.412 [2024-07-12 16:02:36.066996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:36.067461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.412 [2024-07-12 16:02:36.067503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.412 [2024-07-12 16:02:36.067519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.412 [2024-07-12 16:02:36.067774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.412 [2024-07-12 16:02:36.067967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.412 [2024-07-12 16:02:36.067986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.412 [2024-07-12 16:02:36.067998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.412 [2024-07-12 16:02:36.070872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.412 [2024-07-12 16:02:36.080132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.412 [2024-07-12 16:02:36.080476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.413 [2024-07-12 16:02:36.080501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.413 [2024-07-12 16:02:36.080516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.413 [2024-07-12 16:02:36.080725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.413 [2024-07-12 16:02:36.080917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.413 [2024-07-12 16:02:36.080936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.413 [2024-07-12 16:02:36.080948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.413 [2024-07-12 16:02:36.083768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.413 [2024-07-12 16:02:36.093256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.413 [2024-07-12 16:02:36.093653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.413 [2024-07-12 16:02:36.093680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.413 [2024-07-12 16:02:36.093695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.413 [2024-07-12 16:02:36.093923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.413 [2024-07-12 16:02:36.094116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.413 [2024-07-12 16:02:36.094134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.413 [2024-07-12 16:02:36.094146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.413 [2024-07-12 16:02:36.097037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.413 [2024-07-12 16:02:36.106356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.413 [2024-07-12 16:02:36.106745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.413 [2024-07-12 16:02:36.106786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.413 [2024-07-12 16:02:36.106801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.413 [2024-07-12 16:02:36.107052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.413 [2024-07-12 16:02:36.107259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.413 [2024-07-12 16:02:36.107277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.413 [2024-07-12 16:02:36.107294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.413 [2024-07-12 16:02:36.110192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.413 [2024-07-12 16:02:36.119517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.413 [2024-07-12 16:02:36.120045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.413 [2024-07-12 16:02:36.120087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.413 [2024-07-12 16:02:36.120103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.413 [2024-07-12 16:02:36.120359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.413 [2024-07-12 16:02:36.120553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.413 [2024-07-12 16:02:36.120571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.413 [2024-07-12 16:02:36.120583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.413 [2024-07-12 16:02:36.123363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.413 [2024-07-12 16:02:36.132574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.413 [2024-07-12 16:02:36.133042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.413 [2024-07-12 16:02:36.133085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.413 [2024-07-12 16:02:36.133100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.413 [2024-07-12 16:02:36.133358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.413 [2024-07-12 16:02:36.133571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.413 [2024-07-12 16:02:36.133590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.413 [2024-07-12 16:02:36.133603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.413 [2024-07-12 16:02:36.136657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.672 [2024-07-12 16:02:36.146100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.146556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.146587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.146604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.146855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.147063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.147082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.147094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.150290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.672 [2024-07-12 16:02:36.159498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.160111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.160179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.160195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.160441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.160667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.160685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.160697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.163741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.672 [2024-07-12 16:02:36.172836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.173381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.173411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.173427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.173678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.173870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.173889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.173901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.176900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.672 [2024-07-12 16:02:36.186035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.186446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.186475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.186491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.186731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.186938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.186956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.186968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.189862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.672 [2024-07-12 16:02:36.199670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.200187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.200238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.200254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.200494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.200738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.200757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.200769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.203842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.672 [2024-07-12 16:02:36.212979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.213589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.213619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.213635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.213868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.214060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.214078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.214090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.217109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.672 [2024-07-12 16:02:36.226170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.226608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.226651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.226666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.226931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.227123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.227141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.227153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.230128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.672 [2024-07-12 16:02:36.239445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.239863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.239889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.672 [2024-07-12 16:02:36.239904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.672 [2024-07-12 16:02:36.240117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.672 [2024-07-12 16:02:36.240350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.672 [2024-07-12 16:02:36.240369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.672 [2024-07-12 16:02:36.240382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.672 [2024-07-12 16:02:36.243265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.672 [2024-07-12 16:02:36.252668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.672 [2024-07-12 16:02:36.253118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.672 [2024-07-12 16:02:36.253165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.253180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.253403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.253617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.253635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.253647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.256550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.673 [2024-07-12 16:02:36.265861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.266372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.266401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.266417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.266668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.266876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.266895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.266908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.269874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.673 [2024-07-12 16:02:36.279039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.279510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.279538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.279554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.279807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.280000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.280018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.280030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.282950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.673 [2024-07-12 16:02:36.292715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.293217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.293264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.293286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.293513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.293771] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.293790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.293802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.296863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.673 [2024-07-12 16:02:36.306034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.306474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.306503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.306519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.306771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.306964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.306982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.306994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.310036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.673 [2024-07-12 16:02:36.319410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.319853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.319893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.319908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.320156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.320410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.320432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.320445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.323463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.673 [2024-07-12 16:02:36.332642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.333236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.333298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.333313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.333576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.333801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.333824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.333837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.336768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.673 [2024-07-12 16:02:36.345833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.346454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.346492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.346525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.346757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.346951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.346970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.346982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.349897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.673 [2024-07-12 16:02:36.358940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.359302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.359352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.359369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.359607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.359821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.359840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.359852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.362792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.673 [2024-07-12 16:02:36.372076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.372550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.372577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.372608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.372861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.373053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.373072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.373084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.376012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.673 [2024-07-12 16:02:36.385198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.385645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.385672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.673 [2024-07-12 16:02:36.385703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.673 [2024-07-12 16:02:36.385954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.673 [2024-07-12 16:02:36.386146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.673 [2024-07-12 16:02:36.386164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.673 [2024-07-12 16:02:36.386176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.673 [2024-07-12 16:02:36.389064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.673 [2024-07-12 16:02:36.398802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.673 [2024-07-12 16:02:36.399256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.673 [2024-07-12 16:02:36.399286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.674 [2024-07-12 16:02:36.399304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.932 [2024-07-12 16:02:36.399629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.932 [2024-07-12 16:02:36.399865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.932 [2024-07-12 16:02:36.399887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.932 [2024-07-12 16:02:36.399899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.932 [2024-07-12 16:02:36.402838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.932 [2024-07-12 16:02:36.412103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.932 [2024-07-12 16:02:36.412490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.932 [2024-07-12 16:02:36.412534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.932 [2024-07-12 16:02:36.412550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.932 [2024-07-12 16:02:36.412782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.932 [2024-07-12 16:02:36.412982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.932 [2024-07-12 16:02:36.413000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.932 [2024-07-12 16:02:36.413013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.932 [2024-07-12 16:02:36.416010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.932 [2024-07-12 16:02:36.425198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.932 [2024-07-12 16:02:36.425595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.932 [2024-07-12 16:02:36.425638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.932 [2024-07-12 16:02:36.425653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.932 [2024-07-12 16:02:36.425889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.932 [2024-07-12 16:02:36.426082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.426101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.426113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.429019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.933 [2024-07-12 16:02:36.438372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.438813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.438840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.438855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.439090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.439282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.439323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.439338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.442252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.933 [2024-07-12 16:02:36.451612] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.452034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.452063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.452079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.452344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.452563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.452582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.452594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.455530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.933 [2024-07-12 16:02:36.464760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.465179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.465205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.465236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.465480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.465692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.465711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.465728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.468702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.933 [2024-07-12 16:02:36.478007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.478412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.478441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.478457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.478710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.478902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.478921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.478933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.481788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.933 [2024-07-12 16:02:36.491034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.491468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.491495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.491510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.491743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.491935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.491953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.491965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.494832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.933 [2024-07-12 16:02:36.504209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.504690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.504733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.504750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.505002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.505195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.505213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.505225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.508190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.933 [2024-07-12 16:02:36.517192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.517906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.517956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.517973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.518200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.518440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.518470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.518483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.521419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.933 [2024-07-12 16:02:36.530379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.530818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.530872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.530888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.531147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.531367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.531388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.531401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.534278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.933 [2024-07-12 16:02:36.543686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.544108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.544135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.544165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.544408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.544607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.544627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.544653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.547574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.933 [2024-07-12 16:02:36.556960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.557365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.557393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.557409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.557656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.557855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.933 [2024-07-12 16:02:36.557873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.933 [2024-07-12 16:02:36.557885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.933 [2024-07-12 16:02:36.560862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.933 [2024-07-12 16:02:36.570253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.933 [2024-07-12 16:02:36.570654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.933 [2024-07-12 16:02:36.570680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.933 [2024-07-12 16:02:36.570695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.933 [2024-07-12 16:02:36.570903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.933 [2024-07-12 16:02:36.571095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.934 [2024-07-12 16:02:36.571113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.934 [2024-07-12 16:02:36.571125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.934 [2024-07-12 16:02:36.574018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.934 [2024-07-12 16:02:36.583440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.934 [2024-07-12 16:02:36.583924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.934 [2024-07-12 16:02:36.583966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.934 [2024-07-12 16:02:36.583982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.934 [2024-07-12 16:02:36.584233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.934 [2024-07-12 16:02:36.584456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.934 [2024-07-12 16:02:36.584476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.934 [2024-07-12 16:02:36.584489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.934 [2024-07-12 16:02:36.587405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.934 [2024-07-12 16:02:36.596670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.934 [2024-07-12 16:02:36.597148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.934 [2024-07-12 16:02:36.597190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.934 [2024-07-12 16:02:36.597206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.934 [2024-07-12 16:02:36.597443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.934 [2024-07-12 16:02:36.597681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.934 [2024-07-12 16:02:36.597700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.934 [2024-07-12 16:02:36.597711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.934 [2024-07-12 16:02:36.600557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.934 [2024-07-12 16:02:36.609831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.934 [2024-07-12 16:02:36.610369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.934 [2024-07-12 16:02:36.610397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.934 [2024-07-12 16:02:36.610412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.934 [2024-07-12 16:02:36.610653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.934 [2024-07-12 16:02:36.610846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.934 [2024-07-12 16:02:36.610864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.934 [2024-07-12 16:02:36.610876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.934 [2024-07-12 16:02:36.613659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.934 [2024-07-12 16:02:36.623109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.934 [2024-07-12 16:02:36.623546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.934 [2024-07-12 16:02:36.623574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.934 [2024-07-12 16:02:36.623590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.934 [2024-07-12 16:02:36.623852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.934 [2024-07-12 16:02:36.624045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.934 [2024-07-12 16:02:36.624063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.934 [2024-07-12 16:02:36.624075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.934 [2024-07-12 16:02:36.626974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.934 [2024-07-12 16:02:36.636119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.934 [2024-07-12 16:02:36.636542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.934 [2024-07-12 16:02:36.636569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.934 [2024-07-12 16:02:36.636599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.934 [2024-07-12 16:02:36.636847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.934 [2024-07-12 16:02:36.637039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.934 [2024-07-12 16:02:36.637058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.934 [2024-07-12 16:02:36.637070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.934 [2024-07-12 16:02:36.639959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:06.934 [2024-07-12 16:02:36.649201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.934 [2024-07-12 16:02:36.649625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.934 [2024-07-12 16:02:36.649656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:06.934 [2024-07-12 16:02:36.649687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:06.934 [2024-07-12 16:02:36.649932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:06.934 [2024-07-12 16:02:36.650125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.934 [2024-07-12 16:02:36.650143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.934 [2024-07-12 16:02:36.650155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.934 [2024-07-12 16:02:36.653080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.193 [2024-07-12 16:02:36.662741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.663164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.663192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.663223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.663470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.663684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.663703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.663715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.666863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.193 [2024-07-12 16:02:36.675986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.676395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.676424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.676440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.676688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.676881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.676899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.676911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.679853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.193 [2024-07-12 16:02:36.689093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.689546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.689587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.689603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.689835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.690033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.690051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.690063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.692993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.193 [2024-07-12 16:02:36.702157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.702608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.702659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.702674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.702933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.703125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.703144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.703156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.705994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.193 [2024-07-12 16:02:36.715212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.715656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.715684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.715714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.715961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.716154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.716172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.716184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.719126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.193 [2024-07-12 16:02:36.728327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.728761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.728801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.728816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.729063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.729270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.729288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.729324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.732249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.193 [2024-07-12 16:02:36.741426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.741798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.741839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.741854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.742102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.742335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.742354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.742367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.745164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.193 [2024-07-12 16:02:36.754551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.754969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.754995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.755025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.755272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.755493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.755514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.755527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.758429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.193 [2024-07-12 16:02:36.767566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.768175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.193 [2024-07-12 16:02:36.768214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.193 [2024-07-12 16:02:36.768246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.193 [2024-07-12 16:02:36.768476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.193 [2024-07-12 16:02:36.768692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.193 [2024-07-12 16:02:36.768710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.193 [2024-07-12 16:02:36.768723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.193 [2024-07-12 16:02:36.771607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.193 [2024-07-12 16:02:36.780583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.193 [2024-07-12 16:02:36.781056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.781098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.781120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.781369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.781568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.781587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.781600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.784493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.194 [2024-07-12 16:02:36.793878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.794280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.794334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.794368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.794613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.794827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.794846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.794858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.797969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.194 [2024-07-12 16:02:36.806955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.807364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.807394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.807410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.807660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.807852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.807870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.807883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.810783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.194 [2024-07-12 16:02:36.820129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.820635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.820662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.820678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.820924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.821117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.821139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.821152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.824043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.194 [2024-07-12 16:02:36.833160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.833674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.833701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.833732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.833981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.834174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.834192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.834204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.837092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.194 [2024-07-12 16:02:36.846257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.846662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.846703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.846718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.846984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.847177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.847195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.847207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.850094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.194 [2024-07-12 16:02:36.859335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.859754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.859781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.859812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.860064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.860277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.860295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.860307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.863211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.194 [2024-07-12 16:02:36.872420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.872837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.872864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.872880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.873100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.873334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.873354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.873382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.876280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.194 [2024-07-12 16:02:36.885529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.885942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.885968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.885997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.886244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.886466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.886486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.886499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.889393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.194 [2024-07-12 16:02:36.898654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.899008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.899034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.899049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.899277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.899498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.899518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.899530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.902425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.194 [2024-07-12 16:02:36.911732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.194 [2024-07-12 16:02:36.912153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.194 [2024-07-12 16:02:36.912180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.194 [2024-07-12 16:02:36.912211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.194 [2024-07-12 16:02:36.912476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.194 [2024-07-12 16:02:36.912689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.194 [2024-07-12 16:02:36.912708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.194 [2024-07-12 16:02:36.912720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.194 [2024-07-12 16:02:36.915604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.453 [2024-07-12 16:02:36.924817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:36.925196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:36.925238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:36.925253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:36.925533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.453 [2024-07-12 16:02:36.925774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.453 [2024-07-12 16:02:36.925804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.453 [2024-07-12 16:02:36.925828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.453 [2024-07-12 16:02:36.928878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.453 [2024-07-12 16:02:36.937904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:36.938352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:36.938392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:36.938408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:36.938639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.453 [2024-07-12 16:02:36.938831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.453 [2024-07-12 16:02:36.938850] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.453 [2024-07-12 16:02:36.938862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.453 [2024-07-12 16:02:36.941786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.453 [2024-07-12 16:02:36.951063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:36.951480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:36.951507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:36.951539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:36.951786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.453 [2024-07-12 16:02:36.951979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.453 [2024-07-12 16:02:36.951997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.453 [2024-07-12 16:02:36.952014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.453 [2024-07-12 16:02:36.954963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.453 [2024-07-12 16:02:36.964088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:36.964509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:36.964535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:36.964565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:36.964812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.453 [2024-07-12 16:02:36.965004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.453 [2024-07-12 16:02:36.965022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.453 [2024-07-12 16:02:36.965034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.453 [2024-07-12 16:02:36.967864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.453 [2024-07-12 16:02:36.977077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:36.977548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:36.977590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:36.977606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:36.977857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.453 [2024-07-12 16:02:36.978050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.453 [2024-07-12 16:02:36.978068] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.453 [2024-07-12 16:02:36.978080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.453 [2024-07-12 16:02:36.980967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.453 [2024-07-12 16:02:36.990281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:36.990655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:36.990696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:36.990711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:36.990958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.453 [2024-07-12 16:02:36.991151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.453 [2024-07-12 16:02:36.991168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.453 [2024-07-12 16:02:36.991181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.453 [2024-07-12 16:02:36.994067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.453 [2024-07-12 16:02:37.003394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:37.003809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:37.003838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:37.003869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:37.004096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.453 [2024-07-12 16:02:37.004289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.453 [2024-07-12 16:02:37.004307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.453 [2024-07-12 16:02:37.004341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.453 [2024-07-12 16:02:37.007237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.453 [2024-07-12 16:02:37.016562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.453 [2024-07-12 16:02:37.017389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-07-12 16:02:37.017417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.453 [2024-07-12 16:02:37.017432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.453 [2024-07-12 16:02:37.017644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.017837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.017856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.017868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.020765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.454 [2024-07-12 16:02:37.029689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.030126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.030155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.030171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.030463] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.030696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.030715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.030727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.033591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.454 [2024-07-12 16:02:37.042839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.043418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.043447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.043477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.043710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.043923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.043942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.043954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.047059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.454 [2024-07-12 16:02:37.056033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.056453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.056481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.056511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.056763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.056956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.056974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.056986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.059888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.454 [2024-07-12 16:02:37.069149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.069644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.069671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.069686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.069951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.070143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.070161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.070174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.073133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.454 [2024-07-12 16:02:37.082390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.082875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.082917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.082933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.083185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.083406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.083427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.083439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.086380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.454 [2024-07-12 16:02:37.095583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.095966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.095993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.096008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.096237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.096461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.096482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.096494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.099391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.454 [2024-07-12 16:02:37.108770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.109217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.109266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.109281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.109533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.109726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.109744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.109756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.112537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.454 [2024-07-12 16:02:37.121919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.122322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.122364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.122379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.122613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.122822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.122841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.122853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.125793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.454 [2024-07-12 16:02:37.134918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.135338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.135365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.135401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.135650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.135843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.135861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.135873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.138811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.454 [2024-07-12 16:02:37.148215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.148672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.148700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.454 [2024-07-12 16:02:37.148716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.454 [2024-07-12 16:02:37.148951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.454 [2024-07-12 16:02:37.149143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.454 [2024-07-12 16:02:37.149162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.454 [2024-07-12 16:02:37.149174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.454 [2024-07-12 16:02:37.152060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.454 [2024-07-12 16:02:37.161608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.454 [2024-07-12 16:02:37.162058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-07-12 16:02:37.162086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.455 [2024-07-12 16:02:37.162102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.455 [2024-07-12 16:02:37.162355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.455 [2024-07-12 16:02:37.162574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.455 [2024-07-12 16:02:37.162595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.455 [2024-07-12 16:02:37.162622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.455 [2024-07-12 16:02:37.165706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.455 [2024-07-12 16:02:37.174969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.455 [2024-07-12 16:02:37.175431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-07-12 16:02:37.175459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.455 [2024-07-12 16:02:37.175475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.455 [2024-07-12 16:02:37.175717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.455 [2024-07-12 16:02:37.175910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.455 [2024-07-12 16:02:37.175933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.455 [2024-07-12 16:02:37.175945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.455 [2024-07-12 16:02:37.179162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.714 [2024-07-12 16:02:37.188623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.714 [2024-07-12 16:02:37.189055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.714 [2024-07-12 16:02:37.189085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.714 [2024-07-12 16:02:37.189116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.714 [2024-07-12 16:02:37.189386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.714 [2024-07-12 16:02:37.189606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.714 [2024-07-12 16:02:37.189644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.714 [2024-07-12 16:02:37.189656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.714 [2024-07-12 16:02:37.192655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.714 [2024-07-12 16:02:37.201715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.714 [2024-07-12 16:02:37.202117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.714 [2024-07-12 16:02:37.202143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.714 [2024-07-12 16:02:37.202158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.714 [2024-07-12 16:02:37.202399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.714 [2024-07-12 16:02:37.202605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.714 [2024-07-12 16:02:37.202625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.714 [2024-07-12 16:02:37.202637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.714 [2024-07-12 16:02:37.205560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.714 [2024-07-12 16:02:37.214720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.714 [2024-07-12 16:02:37.215127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.714 [2024-07-12 16:02:37.215155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.714 [2024-07-12 16:02:37.215171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.714 [2024-07-12 16:02:37.215433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.714 [2024-07-12 16:02:37.215653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.714 [2024-07-12 16:02:37.215672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.714 [2024-07-12 16:02:37.215685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.714 [2024-07-12 16:02:37.218579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.714 [2024-07-12 16:02:37.227940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.714 [2024-07-12 16:02:37.228405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.714 [2024-07-12 16:02:37.228448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.714 [2024-07-12 16:02:37.228464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.714 [2024-07-12 16:02:37.228728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.714 [2024-07-12 16:02:37.228921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.714 [2024-07-12 16:02:37.228939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.714 [2024-07-12 16:02:37.228951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.714 [2024-07-12 16:02:37.231817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.714 [2024-07-12 16:02:37.240985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.714 [2024-07-12 16:02:37.241405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.714 [2024-07-12 16:02:37.241448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.714 [2024-07-12 16:02:37.241464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.714 [2024-07-12 16:02:37.241715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.241908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.241925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.241937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.244844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.715 [2024-07-12 16:02:37.254167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.254599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.254640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.254656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.254890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.255083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.255102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.255114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.257934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.715 [2024-07-12 16:02:37.267300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.267912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.267950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.267987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.268218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.268465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.268487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.268501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.271472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.715 [2024-07-12 16:02:37.280588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.281083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.281126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.281143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.281407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.281613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.281632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.281645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.284543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.715 [2024-07-12 16:02:37.293754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.294128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.294155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.294184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.294471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.294690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.294708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.294720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.297833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.715 [2024-07-12 16:02:37.306965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.307499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.307541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.307556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.307800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.307992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.308015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.308028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.310930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.715 [2024-07-12 16:02:37.320235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.320688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.320731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.320749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.320991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.321200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.321218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.321231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.324233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.715 [2024-07-12 16:02:37.333375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.333852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.333904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.333919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.334178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.334399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.334419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.334432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.337329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.715 [2024-07-12 16:02:37.346554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.346995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.347037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.347053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.347303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.347525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.347544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.347556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.350477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.715 [2024-07-12 16:02:37.359745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.360166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.360192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.360220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.360465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.360699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.360717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.360729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.363640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.715 [2024-07-12 16:02:37.372926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.373344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.373372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.715 [2024-07-12 16:02:37.373388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.715 [2024-07-12 16:02:37.373627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.715 [2024-07-12 16:02:37.373837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.715 [2024-07-12 16:02:37.373856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.715 [2024-07-12 16:02:37.373868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.715 [2024-07-12 16:02:37.376790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.715 [2024-07-12 16:02:37.386016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.715 [2024-07-12 16:02:37.386419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.715 [2024-07-12 16:02:37.386447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.716 [2024-07-12 16:02:37.386462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.716 [2024-07-12 16:02:37.386716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.716 [2024-07-12 16:02:37.386908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.716 [2024-07-12 16:02:37.386926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.716 [2024-07-12 16:02:37.386939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.716 [2024-07-12 16:02:37.389837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.716 [2024-07-12 16:02:37.399254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.716 [2024-07-12 16:02:37.399657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.716 [2024-07-12 16:02:37.399685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.716 [2024-07-12 16:02:37.399702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.716 [2024-07-12 16:02:37.399948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.716 [2024-07-12 16:02:37.400156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.716 [2024-07-12 16:02:37.400175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.716 [2024-07-12 16:02:37.400187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.716 [2024-07-12 16:02:37.403125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.716 [2024-07-12 16:02:37.412451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.716 [2024-07-12 16:02:37.412952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.716 [2024-07-12 16:02:37.412980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.716 [2024-07-12 16:02:37.413010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.716 [2024-07-12 16:02:37.413255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.716 [2024-07-12 16:02:37.413499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.716 [2024-07-12 16:02:37.413520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.716 [2024-07-12 16:02:37.413533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.716 [2024-07-12 16:02:37.416448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:07.716 [2024-07-12 16:02:37.425580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.716 [2024-07-12 16:02:37.425980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.716 [2024-07-12 16:02:37.426007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.716 [2024-07-12 16:02:37.426036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.716 [2024-07-12 16:02:37.426283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.716 [2024-07-12 16:02:37.426522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.716 [2024-07-12 16:02:37.426543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.716 [2024-07-12 16:02:37.426556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:07.716 [2024-07-12 16:02:37.429469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:07.716 [2024-07-12 16:02:37.438900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:07.716 [2024-07-12 16:02:37.439322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.716 [2024-07-12 16:02:37.439352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:07.716 [2024-07-12 16:02:37.439369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:07.716 [2024-07-12 16:02:37.439584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:07.716 [2024-07-12 16:02:37.439884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:07.716 [2024-07-12 16:02:37.439925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:07.716 [2024-07-12 16:02:37.439943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.024 [2024-07-12 16:02:37.443390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.024 [2024-07-12 16:02:37.452554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.024 [2024-07-12 16:02:37.452935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.024 [2024-07-12 16:02:37.452966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.024 [2024-07-12 16:02:37.452983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.024 [2024-07-12 16:02:37.453212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.024 [2024-07-12 16:02:37.453456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.024 [2024-07-12 16:02:37.453479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.024 [2024-07-12 16:02:37.453492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.024 [2024-07-12 16:02:37.456662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.024 [2024-07-12 16:02:37.465759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.024 [2024-07-12 16:02:37.466190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.024 [2024-07-12 16:02:37.466232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.024 [2024-07-12 16:02:37.466249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.024 [2024-07-12 16:02:37.466501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.024 [2024-07-12 16:02:37.466733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.024 [2024-07-12 16:02:37.466752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.024 [2024-07-12 16:02:37.466764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.024 [2024-07-12 16:02:37.469783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.024 [2024-07-12 16:02:37.478958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.024 [2024-07-12 16:02:37.479394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.024 [2024-07-12 16:02:37.479437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.024 [2024-07-12 16:02:37.479452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.024 [2024-07-12 16:02:37.479707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.024 [2024-07-12 16:02:37.479916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.024 [2024-07-12 16:02:37.479934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.024 [2024-07-12 16:02:37.479946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.024 [2024-07-12 16:02:37.482874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.024 [2024-07-12 16:02:37.491962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.024 [2024-07-12 16:02:37.492384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.492420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.492452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.492685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.492878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.492896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.492909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.495865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.025 [2024-07-12 16:02:37.504982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.505398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.505427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.505442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.505688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.505881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.505899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.505911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.508860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.025 [2024-07-12 16:02:37.518138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.518565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.518593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.518624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.518873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.519065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.519083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.519095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.521953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.025 [2024-07-12 16:02:37.531154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.531531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.531573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.531587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.531833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.532031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.532049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.532061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.535005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.025 [2024-07-12 16:02:37.544282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.544690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.544717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.544747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.544991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.545188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.545207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.545219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.548372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.025 [2024-07-12 16:02:37.557436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.557876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.557903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.557934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.558183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.558402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.558422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.558435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.561267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.025 [2024-07-12 16:02:37.570514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.570933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.570959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.570991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.571239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.571464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.571484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.571497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.574399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.025 [2024-07-12 16:02:37.583655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.584072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.584098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.584128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.584371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.584576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.584595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.584608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.587517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.025 [2024-07-12 16:02:37.596786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.597267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.597309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.597336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.597575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.597800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.597819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.597831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.600689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.025 [2024-07-12 16:02:37.609955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.610487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.610514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.610546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.610791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.610984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.611002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.611015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.613953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.025 [2024-07-12 16:02:37.623193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.623676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.025 [2024-07-12 16:02:37.623717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.025 [2024-07-12 16:02:37.623738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.025 [2024-07-12 16:02:37.623986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.025 [2024-07-12 16:02:37.624179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.025 [2024-07-12 16:02:37.624197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.025 [2024-07-12 16:02:37.624209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.025 [2024-07-12 16:02:37.627164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.025 [2024-07-12 16:02:37.636386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.025 [2024-07-12 16:02:37.636952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.637013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.637028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.637269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.637507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.637528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.637541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.640453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.026 [2024-07-12 16:02:37.649538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.650138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.650200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.650214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.650466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.650678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.650696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.650708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.653664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.026 [2024-07-12 16:02:37.662687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.663188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.663241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.663256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.663509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.663722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.663745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.663758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.666730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.026 [2024-07-12 16:02:37.675805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.676219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.676245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.676275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.676554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.676765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.676784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.676796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.679738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.026 [2024-07-12 16:02:37.689062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.689466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.689493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.689507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.689721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.689929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.689948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.689960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.692896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.026 [2024-07-12 16:02:37.702195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.702604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.702637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.702653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.702903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.703095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.703113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.703125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.706069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.026 [2024-07-12 16:02:37.715371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.715836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.715889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.715904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.716162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.716381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.716400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.716413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.719283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.026 [2024-07-12 16:02:37.728789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.729283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.729346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.729363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.729603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.729816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.729835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.729848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.732908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.026 [2024-07-12 16:02:37.742075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.026 [2024-07-12 16:02:37.742473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.026 [2024-07-12 16:02:37.742504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.026 [2024-07-12 16:02:37.742522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.026 [2024-07-12 16:02:37.742773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.026 [2024-07-12 16:02:37.742966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.026 [2024-07-12 16:02:37.742984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.026 [2024-07-12 16:02:37.742996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.026 [2024-07-12 16:02:37.746126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.285 [2024-07-12 16:02:37.755499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.285 [2024-07-12 16:02:37.756038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.285 [2024-07-12 16:02:37.756093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.285 [2024-07-12 16:02:37.756109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.285 [2024-07-12 16:02:37.756397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.285 [2024-07-12 16:02:37.756645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.285 [2024-07-12 16:02:37.756667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.285 [2024-07-12 16:02:37.756681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.285 [2024-07-12 16:02:37.759944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.285 [2024-07-12 16:02:37.768724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.285 [2024-07-12 16:02:37.769193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.285 [2024-07-12 16:02:37.769236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.285 [2024-07-12 16:02:37.769253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.285 [2024-07-12 16:02:37.769492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.285 [2024-07-12 16:02:37.769728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.285 [2024-07-12 16:02:37.769747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.285 [2024-07-12 16:02:37.769759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.285 [2024-07-12 16:02:37.772693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.285 [2024-07-12 16:02:37.781926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.285 [2024-07-12 16:02:37.782448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.285 [2024-07-12 16:02:37.782477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.285 [2024-07-12 16:02:37.782493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.285 [2024-07-12 16:02:37.782742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.285 [2024-07-12 16:02:37.782935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.285 [2024-07-12 16:02:37.782953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.285 [2024-07-12 16:02:37.782966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.285 [2024-07-12 16:02:37.785929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.285 [2024-07-12 16:02:37.794938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.285 [2024-07-12 16:02:37.795415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.285 [2024-07-12 16:02:37.795444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.285 [2024-07-12 16:02:37.795460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.285 [2024-07-12 16:02:37.795703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.285 [2024-07-12 16:02:37.795942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.285 [2024-07-12 16:02:37.795965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.285 [2024-07-12 16:02:37.795983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.285 [2024-07-12 16:02:37.799237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.285 [2024-07-12 16:02:37.808076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.808508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.808552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.808569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.808816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.809009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.809027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.809039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.811857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.286 [2024-07-12 16:02:37.821268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.821741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.821768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.821800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.822050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.822243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.822261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.822273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.825180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.286 [2024-07-12 16:02:37.834333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.834881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.834933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.834950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.835181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.835420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.835442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.835455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.838357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.286 [2024-07-12 16:02:37.847459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.847890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.847933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.847951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.848198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.848436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.848457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.848470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.851395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.286 [2024-07-12 16:02:37.860531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.860971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.861014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.861030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.861271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.861510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.861530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.861543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.864439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.286 [2024-07-12 16:02:37.873599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.874059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.874101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.874117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.874388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.874595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.874614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.874626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.877522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.286 [2024-07-12 16:02:37.886689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.887090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.887116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.887131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.887377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.887582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.887602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.887615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.890508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.286 [2024-07-12 16:02:37.899871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.900239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.900281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.900296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.900574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.900786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.900805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.900817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.903597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.286 [2024-07-12 16:02:37.912980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.913625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.913691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.913707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.913930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.914122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.914141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.914153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.916974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.286 [2024-07-12 16:02:37.926017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.926415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.926442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.926457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.926670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.926877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.926895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.926912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.929836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.286 [2024-07-12 16:02:37.939421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.286 [2024-07-12 16:02:37.939862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.286 [2024-07-12 16:02:37.939902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.286 [2024-07-12 16:02:37.939918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.286 [2024-07-12 16:02:37.940152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.286 [2024-07-12 16:02:37.940393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.286 [2024-07-12 16:02:37.940414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.286 [2024-07-12 16:02:37.940428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.286 [2024-07-12 16:02:37.943431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.286 [2024-07-12 16:02:37.952513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.287 [2024-07-12 16:02:37.952912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.287 [2024-07-12 16:02:37.952938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.287 [2024-07-12 16:02:37.952952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.287 [2024-07-12 16:02:37.953181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.287 [2024-07-12 16:02:37.953440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.287 [2024-07-12 16:02:37.953476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.287 [2024-07-12 16:02:37.953490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.287 [2024-07-12 16:02:37.956406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.287 [2024-07-12 16:02:37.965587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.287 [2024-07-12 16:02:37.966135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.287 [2024-07-12 16:02:37.966189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.287 [2024-07-12 16:02:37.966208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.287 [2024-07-12 16:02:37.966476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.287 [2024-07-12 16:02:37.966715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.287 [2024-07-12 16:02:37.966734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.287 [2024-07-12 16:02:37.966746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.287 [2024-07-12 16:02:37.969647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.287 [2024-07-12 16:02:37.978689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.287 [2024-07-12 16:02:37.979124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.287 [2024-07-12 16:02:37.979172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.287 [2024-07-12 16:02:37.979190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.287 [2024-07-12 16:02:37.979448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.287 [2024-07-12 16:02:37.979668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.287 [2024-07-12 16:02:37.979687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.287 [2024-07-12 16:02:37.979714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.287 [2024-07-12 16:02:37.982600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.287 [2024-07-12 16:02:37.991772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.287 [2024-07-12 16:02:37.992193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.287 [2024-07-12 16:02:37.992235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.287 [2024-07-12 16:02:37.992252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.287 [2024-07-12 16:02:37.992503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.287 [2024-07-12 16:02:37.992719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.287 [2024-07-12 16:02:37.992737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.287 [2024-07-12 16:02:37.992749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.287 [2024-07-12 16:02:37.995645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.287 [2024-07-12 16:02:38.004925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.287 [2024-07-12 16:02:38.005454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.287 [2024-07-12 16:02:38.005495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.287 [2024-07-12 16:02:38.005511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.287 [2024-07-12 16:02:38.005760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.287 [2024-07-12 16:02:38.005954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.287 [2024-07-12 16:02:38.005973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.287 [2024-07-12 16:02:38.005984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.287 [2024-07-12 16:02:38.008879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.546 [2024-07-12 16:02:38.018136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.546 [2024-07-12 16:02:38.018739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.546 [2024-07-12 16:02:38.018772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.546 [2024-07-12 16:02:38.018789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.546 [2024-07-12 16:02:38.019045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.546 [2024-07-12 16:02:38.019262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.546 [2024-07-12 16:02:38.019281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.546 [2024-07-12 16:02:38.019294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.546 [2024-07-12 16:02:38.022244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.546 [2024-07-12 16:02:38.031178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.546 [2024-07-12 16:02:38.031608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.546 [2024-07-12 16:02:38.031651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.546 [2024-07-12 16:02:38.031668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.546 [2024-07-12 16:02:38.031916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.546 [2024-07-12 16:02:38.032110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.546 [2024-07-12 16:02:38.032128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.546 [2024-07-12 16:02:38.032140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.546 [2024-07-12 16:02:38.035087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.546 [2024-07-12 16:02:38.044283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.546 [2024-07-12 16:02:38.044710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.546 [2024-07-12 16:02:38.044753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.546 [2024-07-12 16:02:38.044769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.546 [2024-07-12 16:02:38.045031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.546 [2024-07-12 16:02:38.045223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.546 [2024-07-12 16:02:38.045242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.546 [2024-07-12 16:02:38.045254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.546 [2024-07-12 16:02:38.048249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.546 [2024-07-12 16:02:38.057583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.546 [2024-07-12 16:02:38.058081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.546 [2024-07-12 16:02:38.058133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.546 [2024-07-12 16:02:38.058148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.546 [2024-07-12 16:02:38.058420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.546 [2024-07-12 16:02:38.058620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.546 [2024-07-12 16:02:38.058654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.546 [2024-07-12 16:02:38.058666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.546 [2024-07-12 16:02:38.061704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.546 [2024-07-12 16:02:38.070696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.546 [2024-07-12 16:02:38.071086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.546 [2024-07-12 16:02:38.071114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.546 [2024-07-12 16:02:38.071130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.546 [2024-07-12 16:02:38.071371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.546 [2024-07-12 16:02:38.071576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.546 [2024-07-12 16:02:38.071595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.546 [2024-07-12 16:02:38.071608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.546 [2024-07-12 16:02:38.074558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.546 [2024-07-12 16:02:38.083747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.546 [2024-07-12 16:02:38.084273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.546 [2024-07-12 16:02:38.084331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.546 [2024-07-12 16:02:38.084349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.546 [2024-07-12 16:02:38.084589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.546 [2024-07-12 16:02:38.084782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.546 [2024-07-12 16:02:38.084800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.546 [2024-07-12 16:02:38.084812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.546 [2024-07-12 16:02:38.087759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.546 [2024-07-12 16:02:38.096877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.546 [2024-07-12 16:02:38.097297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.546 [2024-07-12 16:02:38.097345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.546 [2024-07-12 16:02:38.097362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.546 [2024-07-12 16:02:38.097617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.546 [2024-07-12 16:02:38.097841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.546 [2024-07-12 16:02:38.097860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.546 [2024-07-12 16:02:38.097872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.546 [2024-07-12 16:02:38.100796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.547 [2024-07-12 16:02:38.109930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.110404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.110445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.110466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.110710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.110903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.110921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.110933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.113839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.547 [2024-07-12 16:02:38.123035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.123644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.123682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.123713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.123944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.124138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.124157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.124169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.127099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.547 [2024-07-12 16:02:38.136115] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.136541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.136570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.136587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.136833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.137047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.137066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.137079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.140009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.547 [2024-07-12 16:02:38.149092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.149567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.149609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.149625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.149876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.150068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.150093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.150106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.153031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.547 [2024-07-12 16:02:38.162227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.162674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.162702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.162733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.162980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.163191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.163210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.163222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.166437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.547 [2024-07-12 16:02:38.175570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.175967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.176008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.176023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.176265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.176502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.176524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.176538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.179634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.547 [2024-07-12 16:02:38.188859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.189288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.189336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.189355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.189584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.189794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.189813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.189825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.192822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.547 [2024-07-12 16:02:38.202100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.202525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.202551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.202582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.202828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.203020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.203039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.203051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.206024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.547 [2024-07-12 16:02:38.215356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.215826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.215869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.215885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.216134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.216354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.216388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.216401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.219262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.547 [2024-07-12 16:02:38.228522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.228961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.228988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.229019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.229269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.229510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.229531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.229544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.547 [2024-07-12 16:02:38.232483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.547 [2024-07-12 16:02:38.241616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.547 [2024-07-12 16:02:38.242141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.547 [2024-07-12 16:02:38.242183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.547 [2024-07-12 16:02:38.242199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.547 [2024-07-12 16:02:38.242467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.547 [2024-07-12 16:02:38.242684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.547 [2024-07-12 16:02:38.242702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.547 [2024-07-12 16:02:38.242715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.548 [2024-07-12 16:02:38.245598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.548 [2024-07-12 16:02:38.254805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.548 [2024-07-12 16:02:38.255300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.548 [2024-07-12 16:02:38.255356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.548 [2024-07-12 16:02:38.255372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.548 [2024-07-12 16:02:38.255634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.548 [2024-07-12 16:02:38.255827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.548 [2024-07-12 16:02:38.255845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.548 [2024-07-12 16:02:38.255857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.548 [2024-07-12 16:02:38.258678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 129521 Killed "${NVMF_APP[@]}" "$@" 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=130483 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 130483 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 130483 ']' 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.548 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:08.548 [2024-07-12 16:02:38.268408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.548 [2024-07-12 16:02:38.268947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.548 [2024-07-12 16:02:38.268989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.548 [2024-07-12 16:02:38.269005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.548 [2024-07-12 16:02:38.269261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.548 [2024-07-12 16:02:38.269503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.548 [2024-07-12 16:02:38.269525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.548 [2024-07-12 16:02:38.269541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.548 [2024-07-12 16:02:38.273003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.807 [2024-07-12 16:02:38.282089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.807 [2024-07-12 16:02:38.282473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.807 [2024-07-12 16:02:38.282503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.807 [2024-07-12 16:02:38.282519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.807 [2024-07-12 16:02:38.282763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.807 [2024-07-12 16:02:38.282982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.807 [2024-07-12 16:02:38.283002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.807 [2024-07-12 16:02:38.283014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.807 [2024-07-12 16:02:38.286143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
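The 'Killed "${NVMF_APP[@]}"' message and the tgt_init/nvmfappstart trace above show the test deliberately restarting the target: bdevperf.sh killed the previous nvmf_tgt (pid 129521), and tgt_init now launches a fresh one (pid 130483) inside the cvl_0_0_ns_spdk namespace, then blocks in waitforlisten until it answers on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming only what the trace shows (the real helpers live in autotest_common.sh and nvmf/common.sh, as the trace's source references indicate, and do considerably more, including RPC polling):

    # Sketch only: start a new target in the test netns and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # waitforlisten, roughly: poll until the UNIX-domain RPC socket appears,
    # bailing out if the target process dies first.
    while [ ! -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.2
    done

While this restart is in flight, the reconnect attempts interleaved with the trace keep failing with the same ECONNREFUSED pattern as before.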
00:26:08.807 [2024-07-12 16:02:38.295434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.807 [2024-07-12 16:02:38.295908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.807 [2024-07-12 16:02:38.295935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.807 [2024-07-12 16:02:38.295965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.807 [2024-07-12 16:02:38.296201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.807 [2024-07-12 16:02:38.296437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.807 [2024-07-12 16:02:38.296459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.807 [2024-07-12 16:02:38.296473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.807 [2024-07-12 16:02:38.299758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.807 [2024-07-12 16:02:38.308828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.807 [2024-07-12 16:02:38.309261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.807 [2024-07-12 16:02:38.309287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.807 [2024-07-12 16:02:38.309331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.807 [2024-07-12 16:02:38.309577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.807 [2024-07-12 16:02:38.309812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.807 [2024-07-12 16:02:38.309831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.807 [2024-07-12 16:02:38.309850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.807 [2024-07-12 16:02:38.310739] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:26:08.807 [2024-07-12 16:02:38.310810] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.807 [2024-07-12 16:02:38.312896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.807 [2024-07-12 16:02:38.322264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.807 [2024-07-12 16:02:38.322700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.807 [2024-07-12 16:02:38.322728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.807 [2024-07-12 16:02:38.322743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.807 [2024-07-12 16:02:38.323014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.807 [2024-07-12 16:02:38.323213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.807 [2024-07-12 16:02:38.323232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.807 [2024-07-12 16:02:38.323244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.807 [2024-07-12 16:02:38.326279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.807 [2024-07-12 16:02:38.335513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.807 [2024-07-12 16:02:38.335957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.807 [2024-07-12 16:02:38.335984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.807 [2024-07-12 16:02:38.336016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.807 [2024-07-12 16:02:38.336258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.807 [2024-07-12 16:02:38.336492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.807 [2024-07-12 16:02:38.336513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.807 [2024-07-12 16:02:38.336527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.807 [2024-07-12 16:02:38.339582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.807 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.808 [2024-07-12 16:02:38.348980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.349375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.349404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.349420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.349663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.349861] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.349880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.349897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.352998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.808 [2024-07-12 16:02:38.362476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.362883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.362910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.362925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.363161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.363392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.363413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.363428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.366478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.808 [2024-07-12 16:02:38.375578] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:08.808 [2024-07-12 16:02:38.375655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.376091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.376118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.376134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.376382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.376631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.376651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.376664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.379682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.808 [2024-07-12 16:02:38.388841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.389400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.389436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.389455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.389717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.389918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.389937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.389952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.392940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
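spdk_app_start's "Total cores available: 3" is consistent with the core mask the new target was started with: -m 0xE is binary 1110, which selects cores 1, 2 and 3. Decoding such a mask by hand (illustrative only, not part of the harness):

    mask=0xE
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # prints cores 1, 2 and 3 -> three cores, as the NOTICE above reports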
00:26:08.808 [2024-07-12 16:02:38.402108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.402532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.402560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.402576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.402816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.403014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.403033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.403046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.406063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.808 [2024-07-12 16:02:38.415439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.415945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.415974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.415990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.416244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.416478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.416499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.416512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.419514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.808 [2024-07-12 16:02:38.428780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.429189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.429217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.429232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.429487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.429710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.429729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.429743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.432769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.808 [2024-07-12 16:02:38.442160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.442750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.442786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.442806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.443067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.443278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.443313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.443339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.446308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:08.808 [2024-07-12 16:02:38.455571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.456002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.456030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.456046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.456304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.456534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.456554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.456568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.459583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.808 [2024-07-12 16:02:38.468820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.469320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.469350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.469365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.469606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.469821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.469841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.808 [2024-07-12 16:02:38.469853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.808 [2024-07-12 16:02:38.472899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
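Every cycle above is the same failure: the host-side reconnect to 10.0.0.2 port 4420 gets connect() errno 111 (ECONNREFUSED), the qpair flush then fails with a bad file descriptor, and bdev_nvme reports the controller reset as failed before scheduling another attempt. The storm only ends once the target finishes its own bring-up and starts listening (the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice further down, after which the reset is reported successful). A rough shell check for that state, using only bash and coreutils rather than anything from the test scripts:

  timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
    && echo "listener up on 4420" \
    || echo "no listener yet (connection refused or timed out)"   # the errno 111 case seen above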
00:26:08.808 [2024-07-12 16:02:38.482021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.808 [2024-07-12 16:02:38.482485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.808 [2024-07-12 16:02:38.482515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.808 [2024-07-12 16:02:38.482532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.808 [2024-07-12 16:02:38.482750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.808 [2024-07-12 16:02:38.482968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.808 [2024-07-12 16:02:38.482989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.809 [2024-07-12 16:02:38.483004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.809 [2024-07-12 16:02:38.483219] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.809 [2024-07-12 16:02:38.483249] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.809 [2024-07-12 16:02:38.483264] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.809 [2024-07-12 16:02:38.483275] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.809 [2024-07-12 16:02:38.483285] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.809 [2024-07-12 16:02:38.483432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.809 [2024-07-12 16:02:38.483460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.809 [2024-07-12 16:02:38.483464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.809 [2024-07-12 16:02:38.486266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.809 [2024-07-12 16:02:38.495517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.809 [2024-07-12 16:02:38.496033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.809 [2024-07-12 16:02:38.496070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.809 [2024-07-12 16:02:38.496089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.809 [2024-07-12 16:02:38.496337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.809 [2024-07-12 16:02:38.496574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.809 [2024-07-12 16:02:38.496596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.809 [2024-07-12 16:02:38.496613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:08.809 [2024-07-12 16:02:38.499844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.809 [2024-07-12 16:02:38.509187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.809 [2024-07-12 16:02:38.509772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.809 [2024-07-12 16:02:38.509810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.809 [2024-07-12 16:02:38.509829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.809 [2024-07-12 16:02:38.510068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.809 [2024-07-12 16:02:38.510283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.809 [2024-07-12 16:02:38.510303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.809 [2024-07-12 16:02:38.510344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.809 [2024-07-12 16:02:38.513604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:08.809 [2024-07-12 16:02:38.522861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.809 [2024-07-12 16:02:38.523405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.809 [2024-07-12 16:02:38.523444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:08.809 [2024-07-12 16:02:38.523463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:08.809 [2024-07-12 16:02:38.523700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:08.809 [2024-07-12 16:02:38.523924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.809 [2024-07-12 16:02:38.523945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.809 [2024-07-12 16:02:38.523961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.809 [2024-07-12 16:02:38.527182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
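In between the retries the freshly started target prints its trace setup: tracepoint group mask 0xFFFF, the hint to run 'spdk_trace -s nvmf -i 0' or to copy /dev/shm/nvmf_trace.0 for offline analysis, and the three reactors starting on cores 1-3 (matching the "Total cores available: 3" notice at the top of this block). A sketch of following those hints, assuming the spdk_trace binary sits in the usual build/bin location:

  ./build/bin/spdk_trace -s nvmf -i 0            # live snapshot of the running nvmf app's tracepoints
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0     # or keep the shm file for later offline analysis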
00:26:09.068 [2024-07-12 16:02:38.536780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.537287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.537332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.537353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.537575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.537806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.537826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.537841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.068 [2024-07-12 16:02:38.541202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.068 [2024-07-12 16:02:38.550283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.550846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.550885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.550905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.551141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.551394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.551418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.551438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.068 [2024-07-12 16:02:38.554871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.068 [2024-07-12 16:02:38.563957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.564470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.564507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.564533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.564772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.564986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.565007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.565022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.068 [2024-07-12 16:02:38.568199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.068 [2024-07-12 16:02:38.577407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.577768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.577797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.577813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.578044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.578256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.578276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.578291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.068 [2024-07-12 16:02:38.581524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.068 [2024-07-12 16:02:38.590921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.591312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.591346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.591363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.591578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.591796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.591817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.591830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.068 [2024-07-12 16:02:38.595090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.068 [2024-07-12 16:02:38.604511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.604903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.604930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.604946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.605173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.605419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.605441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.605457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.068 [2024-07-12 16:02:38.608727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:09.068 [2024-07-12 16:02:38.618081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.618518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.618547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.618563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.618779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.619005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.619026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.619039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.068 [2024-07-12 16:02:38.622286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.068 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.068 [2024-07-12 16:02:38.630653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:09.068 [2024-07-12 16:02:38.631768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.068 [2024-07-12 16:02:38.632176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.068 [2024-07-12 16:02:38.632204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.068 [2024-07-12 16:02:38.632220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.068 [2024-07-12 16:02:38.632445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.068 [2024-07-12 16:02:38.632678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.068 [2024-07-12 16:02:38.632698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.068 [2024-07-12 16:02:38.632711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.069 [2024-07-12 16:02:38.635961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
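With the target app up, the harness switches to RPCs; host/bdevperf.sh@17 issues nvmf_create_transport with '-t tcp -o' (the NVMF_TRANSPORT_OPTS value nvmf/common.sh sets for tcp, also visible later in this log) plus '-u 8192', which produces the "TCP Transport Init" notice above. Stripped of the rpc_cmd wrapper this is an ordinary scripts/rpc.py call; the socket path below is the default one and is an assumption here:

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192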
00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.069 [2024-07-12 16:02:38.645259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.069 [2024-07-12 16:02:38.645669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.069 [2024-07-12 16:02:38.645699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.069 [2024-07-12 16:02:38.645715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.069 [2024-07-12 16:02:38.645956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.069 [2024-07-12 16:02:38.646167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.069 [2024-07-12 16:02:38.646187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.069 [2024-07-12 16:02:38.646199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.069 [2024-07-12 16:02:38.649424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.069 [2024-07-12 16:02:38.658838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.069 [2024-07-12 16:02:38.659234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.069 [2024-07-12 16:02:38.659275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.069 [2024-07-12 16:02:38.659291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.069 [2024-07-12 16:02:38.659537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.069 [2024-07-12 16:02:38.659768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.069 [2024-07-12 16:02:38.659789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.069 [2024-07-12 16:02:38.659802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.069 [2024-07-12 16:02:38.663005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
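host/bdevperf.sh@18 then creates the backing device: bdev_malloc_create 64 512 -b Malloc0 allocates a 64 MB RAM-backed bdev with a 512-byte block size under the name Malloc0. The bare equivalent, with the same default-socket assumption as above:

  scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0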
00:26:09.069 [2024-07-12 16:02:38.672436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.069 [2024-07-12 16:02:38.672973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.069 [2024-07-12 16:02:38.673012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.069 [2024-07-12 16:02:38.673031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.069 [2024-07-12 16:02:38.673269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.069 [2024-07-12 16:02:38.673513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.069 [2024-07-12 16:02:38.673536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.069 [2024-07-12 16:02:38.673552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.069 [2024-07-12 16:02:38.676773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.069 Malloc0 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.069 [2024-07-12 16:02:38.686091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.069 [2024-07-12 16:02:38.686500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:09.069 [2024-07-12 16:02:38.686528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f46fe0 with addr=10.0.0.2, port=4420 00:26:09.069 [2024-07-12 16:02:38.686552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f46fe0 is same with the state(5) to be set 00:26:09.069 [2024-07-12 16:02:38.686768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f46fe0 (9): Bad file descriptor 00:26:09.069 [2024-07-12 16:02:38.687000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:09.069 [2024-07-12 16:02:38.687021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:09.069 [2024-07-12 16:02:38.687034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:09.069 [2024-07-12 16:02:38.690288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
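The rest of the bring-up is interleaved with the retry noise: host/bdevperf.sh@19-@21 create subsystem nqn.2016-06.io.spdk:cnode1 (-a allow any host, serial SPDK00000000000001), attach Malloc0 as its namespace, and finally add the TCP listener on 10.0.0.2:4420; it is that last step, visible just below as "NVMe/TCP Target Listening", that lets the host's next reset succeed. As plain rpc.py calls the sequence would look roughly like this (default socket path assumed):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420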
00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:09.069 [2024-07-12 16:02:38.697045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.069 [2024-07-12 16:02:38.699608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.069 16:02:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 129814 00:26:09.069 [2024-07-12 16:02:38.776701] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:19.029 00:26:19.029 Latency(us) 00:26:19.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:19.030 Verification LBA range: start 0x0 length 0x4000 00:26:19.030 Nvme1n1 : 15.05 6702.73 26.18 10256.01 0.00 7504.98 588.61 42913.94 00:26:19.030 =================================================================================================================== 00:26:19.030 Total : 6702.73 26.18 10256.01 0.00 7504.98 588.61 42913.94 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:19.030 rmmod nvme_tcp 00:26:19.030 rmmod nvme_fabrics 00:26:19.030 rmmod nvme_keyring 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 130483 ']' 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 130483 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 130483 ']' 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 130483 00:26:19.030 16:02:48 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130483 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130483' 00:26:19.030 killing process with pid 130483 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 130483 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 130483 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.030 16:02:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.929 16:02:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:20.929 00:26:20.929 real 0m22.998s 00:26:20.929 user 1m0.956s 00:26:20.929 sys 0m4.725s 00:26:20.929 16:02:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:20.929 16:02:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:20.929 ************************************ 00:26:20.930 END TEST nvmf_bdevperf 00:26:20.930 ************************************ 00:26:20.930 16:02:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:20.930 16:02:50 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:20.930 16:02:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:20.930 16:02:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:20.930 16:02:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:20.930 ************************************ 00:26:20.930 START TEST nvmf_target_disconnect 00:26:20.930 ************************************ 00:26:20.930 16:02:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:21.188 * Looking for test storage... 
00:26:21.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.188 16:02:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.189 16:02:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.085 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.085 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.085 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.085 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.085 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.085 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.085 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
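gather_supported_nvmf_pci_devs builds its candidate lists purely from PCI vendor/device IDs (e810 covers 0x1592 and 0x159b, x722 is 0x37d2, plus the Mellanox entries), and the [[ e810 == e810 ]] branches below narrow the search to the e810 list, matching the two 0x8086:0x159b functions found at 0000:09:00.0 and 0000:09:00.1 on this node. A quick manual cross-check on such a box, using standard lspci flags rather than the test helpers:

  lspci -d 8086:159b     # list Intel E810 functions by vendor:device ID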
00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:23.086 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:23.086 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.086 16:02:52 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:23.086 Found net devices under 0000:09:00.0: cvl_0_0 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:23.086 Found net devices under 0000:09:00.1: cvl_0_1 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:23.086 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:23.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:26:23.343 00:26:23.343 --- 10.0.0.2 ping statistics --- 00:26:23.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.343 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:23.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:26:23.343 00:26:23.343 --- 10.0.0.1 ping statistics --- 00:26:23.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.343 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.343 ************************************ 00:26:23.343 START TEST nvmf_target_disconnect_tc1 00:26:23.343 ************************************ 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:23.343 
16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:23.343 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.343 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.343 [2024-07-12 16:02:52.974065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.343 [2024-07-12 16:02:52.974142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4d340 with addr=10.0.0.2, port=4420 00:26:23.343 [2024-07-12 16:02:52.974190] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:23.344 [2024-07-12 16:02:52.974228] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:23.344 [2024-07-12 16:02:52.974241] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:23.344 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:23.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:23.344 Initializing NVMe Controllers 00:26:23.344 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:23.344 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:23.344 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:23.344 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:23.344 00:26:23.344 real 0m0.089s 00:26:23.344 user 0m0.040s 00:26:23.344 sys 
0m0.049s 00:26:23.344 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:23.344 16:02:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:23.344 ************************************ 00:26:23.344 END TEST nvmf_target_disconnect_tc1 00:26:23.344 ************************************ 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:23.344 ************************************ 00:26:23.344 START TEST nvmf_target_disconnect_tc2 00:26:23.344 ************************************ 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=133632 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 133632 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 133632 ']' 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
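tc1 above is a deliberate expected-failure case: the reconnect example is pointed at 10.0.0.2:4420 before any target is listening, so connect() fails with errno 111, spdk_nvme_probe() cannot create the admin qpair, and the NOT wrapper in host/target_disconnect.sh treats the resulting non-zero exit (es=1) as a pass. A minimal sketch of the same pattern; the flag meanings in the comments are assumptions based on the usual SPDK example conventions, not spelled out in this log:

  # tc1 pattern: the initiator must fail cleanly while nothing listens on the port.
  # Assumed flag meanings: -q queue depth, -o I/O size, -w workload, -M read percentage,
  # -t run time in seconds, -c core mask, -r transport ID.
  rootdir=/path/to/spdk   # placeholder; the trace uses the Jenkins workspace checkout
  if "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
         -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
      echo "unexpected success: something is already listening on 10.0.0.2:4420" >&2
      exit 1
  fi
  echo "probe failed as expected (connection refused)"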
00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:23.344 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.601 [2024-07-12 16:02:53.085107] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:26:23.601 [2024-07-12 16:02:53.085189] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.601 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.601 [2024-07-12 16:02:53.147358] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:23.601 [2024-07-12 16:02:53.257594] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.601 [2024-07-12 16:02:53.257646] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.601 [2024-07-12 16:02:53.257678] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.601 [2024-07-12 16:02:53.257691] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:23.601 [2024-07-12 16:02:53.257701] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.601 [2024-07-12 16:02:53.258021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:23.601 [2024-07-12 16:02:53.258083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:23.601 [2024-07-12 16:02:53.258149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:23.601 [2024-07-12 16:02:53.258152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:23.858 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:23.858 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:23.858 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:23.858 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:23.858 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.858 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.859 Malloc0 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:23.859 16:02:53 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.859 [2024-07-12 16:02:53.446225] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.859 [2024-07-12 16:02:53.474500] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=133776 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:23.859 16:02:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.859 EAL: No free 2048 kB 
hugepages reported on node 1 00:26:26.446 16:02:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 133632 00:26:26.446 16:02:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 [2024-07-12 16:02:55.499870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 
starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 [2024-07-12 16:02:55.500219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O 
failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Write completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 [2024-07-12 16:02:55.500541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.446 starting I/O failed 00:26:26.446 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 
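The aborted completions above are the core of tc2: the target is started inside the namespace, provisioned over RPC with one Malloc namespace and a TCP listener on 10.0.0.2:4420, a reconnect workload is started against it, and the target process is then hard-killed while that I/O is in flight. Condensed from the trace; calling scripts/rpc.py directly and polling for the RPC socket are stand-ins for the test framework's rpc_cmd and waitforlisten helpers, whose internals are not shown in this log:

  # tc2 sequence condensed from the trace above ($rootdir = SPDK checkout).
  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # crude wait for the RPC socket

  "$rootdir/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
  "$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o
  "$rootdir/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rootdir/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rootdir/scripts/rpc.py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  "$rootdir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # hard-kill the target while I/O is outstanding
  sleep 2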
00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Write completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 Read completed with error (sct=0, sc=8) 00:26:26.447 starting I/O failed 00:26:26.447 [2024-07-12 16:02:55.500844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.447 [2024-07-12 16:02:55.501051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.501092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.501264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.501291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.501452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.501480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.501620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.501647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.501804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.501830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.501961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.501988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 
00:26:26.447 [2024-07-12 16:02:55.502119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.502145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.502338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.502380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.502518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.502544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.502681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.502707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.502859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.502885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.503056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.503084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.503247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.503273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.503431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.503465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.503594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.503619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.503756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.503781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 
00:26:26.447 [2024-07-12 16:02:55.503964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.503990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.504138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.504164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.504325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.504351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.504500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.504526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.504770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.504796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.504944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.504971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.505137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.505163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.505332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.505358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.505483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.505509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.505671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.505713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 
00:26:26.447 [2024-07-12 16:02:55.505879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.505905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.506030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.506056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.506251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.506291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.506460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.506499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.447 [2024-07-12 16:02:55.506684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.447 [2024-07-12 16:02:55.506712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.447 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.506870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.506897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.507079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.507105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.507261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.507288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.507435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.507473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.507611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.507651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 
00:26:26.448 [2024-07-12 16:02:55.507788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.507829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.507996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.508023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.508185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.508211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.508374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.508415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.508571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.508599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.508783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.508809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.508983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.509010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.509194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.509239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.509408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.509434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.509592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.509617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 
00:26:26.448 [2024-07-12 16:02:55.509745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.509770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.509926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.509952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.510118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.510143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.510296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.510328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.510476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.510502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.510631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.510658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.510790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.510816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.510978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.511003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.511133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.511160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.511341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.511367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 
00:26:26.448 [2024-07-12 16:02:55.511517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.511543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.511676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.511702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.511915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.511965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.512149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.512174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.512330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.512356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.512497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.512523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.512659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.512686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.512851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.512876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.513041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.513066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.513224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.513249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 
00:26:26.448 [2024-07-12 16:02:55.513380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.513406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.513533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.513559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.513682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.513707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.513847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.513874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.448 [2024-07-12 16:02:55.514084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.448 [2024-07-12 16:02:55.514110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.448 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.514234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.514259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.514424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.514450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.514580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.514606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.514740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.514773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.514906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.514932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 
00:26:26.449 [2024-07-12 16:02:55.515111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.515136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.515266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.515291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.515455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.515494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.515636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.515664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.515815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.515841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.516002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.516030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.516223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.516281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.516457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.516485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.516623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.516649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.516809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.516835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 
00:26:26.449 [2024-07-12 16:02:55.516963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.516989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.517140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.517166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.517332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.517358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.517513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.517538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.517671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.517698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.517891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.517937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.518095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.518122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.518294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.518328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.518466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.518492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.518629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.518654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 
00:26:26.449 [2024-07-12 16:02:55.518809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.518834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.519106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.519157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.519311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.519341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.519497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.519523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.519683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.519710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.519842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.519867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.520025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.520051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.520203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.520229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.520382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.520408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.520592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.520619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 
00:26:26.449 [2024-07-12 16:02:55.520889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.520939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.521143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.521168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.521332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.521359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.521517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.521542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.449 [2024-07-12 16:02:55.521723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.449 [2024-07-12 16:02:55.521747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.449 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.521973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.521998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.522154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.522179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.522364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.522389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.522540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.522571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.522729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.522775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 
00:26:26.450 [2024-07-12 16:02:55.522959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.523004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.523239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.523265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.523406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.523432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.523562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.523588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.523743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.523769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.524101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.524161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.524287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.524313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.524458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.524483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.524616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.524642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.524767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.524794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 
00:26:26.450 [2024-07-12 16:02:55.524946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.524971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.525106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.525131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.525264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.525289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.525419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.525445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.525604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.525629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.525797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.525824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.525980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.526006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.526160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.526185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.526321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.526347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.526467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.526492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 
00:26:26.450 [2024-07-12 16:02:55.526624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.526649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.526805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.526830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.526981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.527006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.527129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.527155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.527305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.527336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.527498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.527524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.450 [2024-07-12 16:02:55.527683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.450 [2024-07-12 16:02:55.527708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.450 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.527860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.527886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.528050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.528076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.528256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.528281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 
00:26:26.451 [2024-07-12 16:02:55.528444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.528469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.528597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.528623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.528857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.528883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.529037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.529062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.529208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.529233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.529384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.529423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.529595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.529623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.529837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.529866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.530103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.530155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.530402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.530430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 
00:26:26.451 [2024-07-12 16:02:55.530610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.530636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.530760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.530802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.530975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.531006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.531202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.531232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.531405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.531432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.531587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.531614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.531805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.531849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.532011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.532043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.532229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.532258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.532440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.532466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 
00:26:26.451 [2024-07-12 16:02:55.532592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.532619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.532777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.532804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.532979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.533025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.533188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.533217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.533403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.533431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.533589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.533616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.533776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.533803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.533957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.533984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.534128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.534158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.534360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.534387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 
00:26:26.451 [2024-07-12 16:02:55.534523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.534549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.534681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.534707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.534852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.534880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.535055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.535096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.535263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.535289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.535433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.451 [2024-07-12 16:02:55.535460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.451 qpair failed and we were unable to recover it. 00:26:26.451 [2024-07-12 16:02:55.535613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.535639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.535826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.535872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.536039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.536067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.536244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.536269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 
00:26:26.452 [2024-07-12 16:02:55.536416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.536442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.536605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.536631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.536759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.536785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.536955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.536983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.537144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.537173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.537340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.537393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.537549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.537575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.537762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.537808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.538008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.538040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.538231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.538259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 
00:26:26.452 [2024-07-12 16:02:55.538415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.538441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.538561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.538586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.538817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.538860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.539023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.539051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.539209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.539237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.539416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.539443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.539619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.539645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.539802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.539829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.539983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.540012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.540205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.540233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 
00:26:26.452 [2024-07-12 16:02:55.540398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.540425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.540579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.540605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.540766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.540792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.540945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.540970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.541118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.541146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.541313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.541345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.541505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.541531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.541696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.541722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.541869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.541895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.542016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.542041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 
00:26:26.452 [2024-07-12 16:02:55.542200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.542226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.542384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.542410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.542605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.542633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.542828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.542856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.543064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.452 [2024-07-12 16:02:55.543091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.452 qpair failed and we were unable to recover it. 00:26:26.452 [2024-07-12 16:02:55.543236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.543264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.543440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.543466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.543646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.543671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.543819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.543845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.543997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.544023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 
00:26:26.453 [2024-07-12 16:02:55.544217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.544243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.544436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.544463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.544652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.544698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.544916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.544963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.545120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.545148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.545308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.545361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.545543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.545570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.545702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.545728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.545866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.545896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.546118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.546145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 
00:26:26.453 [2024-07-12 16:02:55.546300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.546332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.546497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.546523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.546719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.546766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.546931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.546980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.547171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.547200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.547340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.547368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.547527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.547577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.547783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.547811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.548018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.548064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.548256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.548285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 
00:26:26.453 [2024-07-12 16:02:55.548497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.548545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.548753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.548802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.549058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.549105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.549270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.549297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.549514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.549562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.549803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.549830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.549991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.550018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.550186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.550213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.550380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.550409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.550579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.550627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 
00:26:26.453 [2024-07-12 16:02:55.550831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.550858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.551017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.551044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.551223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.551251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.453 qpair failed and we were unable to recover it. 00:26:26.453 [2024-07-12 16:02:55.551451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.453 [2024-07-12 16:02:55.551502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.551741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.551791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.551964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.552013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.552201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.552229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.552461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.552513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.552723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.552771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.552979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.553027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 
00:26:26.454 [2024-07-12 16:02:55.553218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.553246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.553460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.553508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.553728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.553775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.554037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.554089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.554220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.554249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.554397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.554426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.554599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.554654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.554881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.554932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.555095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.555129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.555272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.555300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 
00:26:26.454 [2024-07-12 16:02:55.555509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.555558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.555732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.555781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.555989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.556015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.556170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.556196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.556348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.556376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.556606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.556632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.556824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.556874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.557069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.557096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.557264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.557292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.557499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.557551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 
00:26:26.454 [2024-07-12 16:02:55.557703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.557730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.557922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.557948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.558107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.558134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.558299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.558334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.558647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.558703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.558897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.558949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.559149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.559202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.559354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.559382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.559564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.559589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.559828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.559879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 
00:26:26.454 [2024-07-12 16:02:55.560071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.454 [2024-07-12 16:02:55.560101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.454 qpair failed and we were unable to recover it. 00:26:26.454 [2024-07-12 16:02:55.560290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.560324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.560523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.560576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.560768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.560818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.561044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.561093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.561241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.561269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.561552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.561603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.561785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.561835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.562061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.562111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.562279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.562308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 
00:26:26.455 [2024-07-12 16:02:55.562537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.562589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.562808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.562856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.563181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.563235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.563434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.563460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.563616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.563641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.563845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.563904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.564174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.564223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.564491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.564548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.564722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.564753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.564912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.564954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 
00:26:26.455 [2024-07-12 16:02:55.565158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.565199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.565348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.565376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.565627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.565678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.565905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.565956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.566145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.566174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.566360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.566424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.566660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.566710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.566965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.567016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.567185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.567213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.567376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.567434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 
00:26:26.455 [2024-07-12 16:02:55.567587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.567612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.567745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.567786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.568042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.455 [2024-07-12 16:02:55.568095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.455 qpair failed and we were unable to recover it. 00:26:26.455 [2024-07-12 16:02:55.568269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.568296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.568466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.568493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.568726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.568755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.569034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.569065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.569207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.569235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.569494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.569547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.569748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.569774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 
00:26:26.456 [2024-07-12 16:02:55.569935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.569961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.570097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.570124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.570305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.570336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.570576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.570601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.570729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.570756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.571030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.571084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.571286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.571313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.571494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.571522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.571740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.571793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.572048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.572074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 
00:26:26.456 [2024-07-12 16:02:55.572230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.572256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.572455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.572483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.572681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.572707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.572841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.572883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.573093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.573147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.573339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.573367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.573605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.573658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.573968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.574030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.574223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.574257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.574443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.574472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 
00:26:26.456 [2024-07-12 16:02:55.574697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.574755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.575091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.575155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.575347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.575375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.575620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.575674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.575912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.575941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.576197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.576248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.576453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.576481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.576712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.576764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.577016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.577049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.456 [2024-07-12 16:02:55.577234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.577260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 
00:26:26.456 [2024-07-12 16:02:55.577457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.456 [2024-07-12 16:02:55.577485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.456 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.577776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.577841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.578124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.578171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.578386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.578414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.578660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.578712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.579046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.579105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.579309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.579340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.579482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.579508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.579659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.579685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.579897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.579949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 
00:26:26.457 [2024-07-12 16:02:55.580141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.580169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.580312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.580345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.580644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.580700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.580834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.580863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.581183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.581239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.581420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.581448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.581704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.581756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.582087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.582147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.582340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.582368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.582540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.582567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 
00:26:26.457 [2024-07-12 16:02:55.582883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.582945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.583209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.583261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.583411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.583440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.583693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.583741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.583988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.584038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.584206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.584234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.584446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.584500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.584775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.584837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.585062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.585117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.585325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.585352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 
00:26:26.457 [2024-07-12 16:02:55.585532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.585559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.585768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.585821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.586070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.586123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.586304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.586348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.586518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.586546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.586824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.586874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.587167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.587223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.587392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.587421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.457 qpair failed and we were unable to recover it. 00:26:26.457 [2024-07-12 16:02:55.587705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.457 [2024-07-12 16:02:55.587762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.588069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.588116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 
00:26:26.458 [2024-07-12 16:02:55.588322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.588349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.588505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.588531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.588829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.588897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.589222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.589280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.589480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.589507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.589724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.589780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.590058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.590114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.590306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.590339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.590485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.590511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.590665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.590692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 
00:26:26.458 [2024-07-12 16:02:55.590937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.590988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.591283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.591344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.591513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.591542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.591812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.591862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.592037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.592063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.592233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.592276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.592540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.592590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.592868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.592917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.593161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.593211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.593392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.593419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 
00:26:26.458 [2024-07-12 16:02:55.593664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.593715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.593977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.594025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.594197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.594222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.594365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.594392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.594633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.594687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.595012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.595065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.595237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.595265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.595432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.595460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.595730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.595789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.596053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.596105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 
00:26:26.458 [2024-07-12 16:02:55.596298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.596332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.596530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.596556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.596751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.596776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.596948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.596974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.597163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.597205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.597395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.597439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.597671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.597725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.598038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.598092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.598281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.458 [2024-07-12 16:02:55.598308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.458 qpair failed and we were unable to recover it. 00:26:26.458 [2024-07-12 16:02:55.598482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.598510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 
00:26:26.459 [2024-07-12 16:02:55.598792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.598841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.599096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.599146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.599366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.599393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.599551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.599576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.599705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.599731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.599892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.599918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.600066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.600095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.600261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.600289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.600607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.600662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.600974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.601029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 
00:26:26.459 [2024-07-12 16:02:55.601165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.601193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.601399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.601457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.601683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.601708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.601863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.601890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.602049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.602090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.602225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.602253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.602480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.602533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.602841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.602899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.603158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.603210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.603429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.603480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 
00:26:26.459 [2024-07-12 16:02:55.603642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.603669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.603799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.603826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.604070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.604121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.604289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.604323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.604552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.604606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.604873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.604924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.605233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.605298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.459 [2024-07-12 16:02:55.605456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.459 [2024-07-12 16:02:55.605483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.459 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.605759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.605817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.606016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.606042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 
00:26:26.460 [2024-07-12 16:02:55.606197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.606223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.606368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.606394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.606632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.606686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.606925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.606976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.607145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.607172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.607407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.607464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.607771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.607829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.608061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.608113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.608281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.608310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.608487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.608514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 
00:26:26.460 [2024-07-12 16:02:55.608790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.608841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.609156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.609208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.609391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.609419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.609670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.609724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.610027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.610085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.610234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.610262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.610451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.610480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.610625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.610651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.610806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.610832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.611140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.611202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 
00:26:26.460 [2024-07-12 16:02:55.611425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.611478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.611680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.611706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.611867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.611893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.612068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.612095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.612285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.612313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.612588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.612648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.612914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.612968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.613313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.613373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.613509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.613538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.613739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.613767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 
00:26:26.460 [2024-07-12 16:02:55.614063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.614119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.614288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.614320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.614470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.614499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.614760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.614814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.614950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.614979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.615179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.615205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.460 qpair failed and we were unable to recover it. 00:26:26.460 [2024-07-12 16:02:55.615396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.460 [2024-07-12 16:02:55.615451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.615700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.615753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.616010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.616061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.616260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.616288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 
00:26:26.461 [2024-07-12 16:02:55.616446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.616474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.616682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.616708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.616891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.616917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.617128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.617186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.617388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.617444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.617758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.617815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.618081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.618132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.618333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.618360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.618516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.618542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.618703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.618746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 
00:26:26.461 [2024-07-12 16:02:55.618979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.619032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.619201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.619230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.619404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.619433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.619641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.619702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.620013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.620068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.620211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.620239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.620461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.620488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.620643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.620669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.620971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.621028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.621195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.621223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 
00:26:26.461 [2024-07-12 16:02:55.621402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.621428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.621611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.621674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.621950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.621999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.622205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.622231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.622361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.622387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.622566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.622655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.622854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.622880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.623027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.623052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.623187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.623213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.623374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.623418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 
00:26:26.461 [2024-07-12 16:02:55.623733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.623793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.624099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.624159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.624330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.461 [2024-07-12 16:02:55.624357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.461 qpair failed and we were unable to recover it. 00:26:26.461 [2024-07-12 16:02:55.624496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.624522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.624674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.624716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.624953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.624983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.625167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.625193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.625351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.625378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.625510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.625537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.625844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.625910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 
00:26:26.462 [2024-07-12 16:02:55.626084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.626112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.626270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.626297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.626576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.626629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.626885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.626938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.627220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.627271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.627540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.627593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.627907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.627973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.628222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.628249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.628409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.628437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.628570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.628596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 
00:26:26.462 [2024-07-12 16:02:55.628749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.628776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.628900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.628926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.629125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.629153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.629346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.629375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.629599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.629660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.629968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.630028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.630210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.630238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.630437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.630464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.630623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.630649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.630821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.630847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 
00:26:26.462 [2024-07-12 16:02:55.631026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.631053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.631230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.631257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.631598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.631653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.631884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.631912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.632065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.632093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.632268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.632298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.632460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.632486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.632640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.632670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.632864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.632892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.462 [2024-07-12 16:02:55.633075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.633102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 
00:26:26.462 [2024-07-12 16:02:55.633282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.462 [2024-07-12 16:02:55.633308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.462 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.633495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.633521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.633779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.633805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.634013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.634067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.634232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.634259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.634522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.634572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.634843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.634892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.635181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.635235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.635403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.635431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.635744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.635804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 
00:26:26.463 [2024-07-12 16:02:55.636139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.636192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.636337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.636366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.636628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.636683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.637003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.637063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.637234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.637262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.637461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.637488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.637640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.637667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.637940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.637988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.638193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.638221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.638396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.638422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 
00:26:26.463 [2024-07-12 16:02:55.638619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.638685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.638839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.638866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.639031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.639057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.639183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.639209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.639462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.639514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.639788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.639839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.640021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.640047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.640202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.640228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.640496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.640549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.640772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.640822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 
00:26:26.463 [2024-07-12 16:02:55.641076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.641126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.641333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.641359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.641515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.641543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.641706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.641747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.642025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.642086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.642252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.642285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.642496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.642523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.642701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.642727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.642906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.642932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 00:26:26.463 [2024-07-12 16:02:55.643059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.463 [2024-07-12 16:02:55.643085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.463 qpair failed and we were unable to recover it. 
00:26:26.464 [2024-07-12 16:02:55.643223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.643251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.643523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.643576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.643901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.643959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.644126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.644154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.644354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.644382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.644620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.644673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.644907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.644936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.645239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.645292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.645478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.645505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.645645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.645672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 
00:26:26.464 [2024-07-12 16:02:55.645928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.645980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.646164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.646191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.646447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.646503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.646749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.646802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.647011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.647069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.647246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.647271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.647429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.647458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.647742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.647798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.648063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.648108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.648305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.648341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 
00:26:26.464 [2024-07-12 16:02:55.648511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.648539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.648790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.648841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.649136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.649197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.649359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.649388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.649525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.649554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.649831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.649885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.650145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.650196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.650337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.650366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.650572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.650626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.650805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.650831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 
00:26:26.464 [2024-07-12 16:02:55.650973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.464 [2024-07-12 16:02:55.650999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.464 qpair failed and we were unable to recover it. 00:26:26.464 [2024-07-12 16:02:55.651149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.651175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.651421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.651473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.651642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.651669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.651949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.651998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.652137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.652169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.652307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c693f0 is same with the state(5) to be set 00:26:26.465 [2024-07-12 16:02:55.652538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.652578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.652724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.652752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.653006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.653067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 
00:26:26.465 [2024-07-12 16:02:55.653300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.653336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.653494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.653520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.653655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.653681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.653931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.653989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.654277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.654302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.654469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.654494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.654674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.654701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.654907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.654966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.655255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.655339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.655552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.655581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 
00:26:26.465 [2024-07-12 16:02:55.655801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.655826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.655960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.655985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.656241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.656267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.656397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.656424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.656557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.656583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.656768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.656794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.656930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.656978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.657263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.657333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.657519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.657548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.657827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.657882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 
00:26:26.465 [2024-07-12 16:02:55.658211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.658267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.658530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.658559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.658744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.658812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.659136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.659162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.659350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.659380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.659544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.659571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.659837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.659895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.660232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.660286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.660491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.660518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.465 qpair failed and we were unable to recover it. 00:26:26.465 [2024-07-12 16:02:55.660659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.465 [2024-07-12 16:02:55.660687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 
00:26:26.466 [2024-07-12 16:02:55.660944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.660999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.661300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.661333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.661472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.661499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.661662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.661690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.661928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.661983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.662254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.662280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.662454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.662498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.662757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.662813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.663125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.663185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.663409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.663437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 
00:26:26.466 [2024-07-12 16:02:55.663581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.663635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.663934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.663960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.664120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.664146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.664279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.664305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.664495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.664524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.664689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.664717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.664963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.665018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.665392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.665422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.665561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.665589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.665823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.665878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 
00:26:26.466 [2024-07-12 16:02:55.666241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.666300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.666532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.666560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.666847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.666907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.667209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.667235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.667398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.667424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.667582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.667659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.667941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.668001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.668330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.668389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.668558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.668586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.668885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.668939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 
00:26:26.466 [2024-07-12 16:02:55.669235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.669292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.669494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.669522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.669768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.669832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.670220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.670298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.670514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.670543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.670813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.670870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.671120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.671146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.671271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.671296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.671497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.466 [2024-07-12 16:02:55.671525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.466 qpair failed and we were unable to recover it. 00:26:26.466 [2024-07-12 16:02:55.671668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.671695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 
00:26:26.467 [2024-07-12 16:02:55.671896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.671960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.672297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.672335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.672508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.672534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.672651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.672676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.672858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.672884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.673189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.673248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.673604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.673664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.674029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.674084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.674387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.674446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.674786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.674841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 
00:26:26.467 [2024-07-12 16:02:55.675180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.675234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.675521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.675578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.675885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.675941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.676273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.676340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.676576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.676602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.676759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.676785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.676940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.676990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.677386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.677442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.677726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.677780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.678076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.678102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 
00:26:26.467 [2024-07-12 16:02:55.678279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.678304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.678437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.678486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.678767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.678825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.679156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.679218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.679574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.679634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.679997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.680055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.680394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.680456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.680757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.680827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.681182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.681241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.681552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.681614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 
00:26:26.467 [2024-07-12 16:02:55.681981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.682042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.682414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.682474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.682768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.682798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.682988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.683049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.683375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.683437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.683762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.683822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.684152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.684212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.684572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.467 [2024-07-12 16:02:55.684633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.467 qpair failed and we were unable to recover it. 00:26:26.467 [2024-07-12 16:02:55.684994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.685053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.685326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.685352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 
00:26:26.468 [2024-07-12 16:02:55.685478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.685504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.685632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.685658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.685930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.685990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.686353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.686421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.686715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.686775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.687105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.687165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.687524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.687585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.687911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.687976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.688369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.688446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.688830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.688895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 
00:26:26.468 [2024-07-12 16:02:55.689242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.689301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.689687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.689746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.690112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.690176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.690533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.690597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.690988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.691052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.691435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.691512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.691864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.691923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.692192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.692218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.692414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.692441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.692769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.692798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 
00:26:26.468 [2024-07-12 16:02:55.692975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.693002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.693297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.693377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.693734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.693799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.694187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.694250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.694622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.694686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.695032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.695095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.695493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.695558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.468 [2024-07-12 16:02:55.695951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.468 [2024-07-12 16:02:55.696014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.468 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.696399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.696464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.696754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.696780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 
00:26:26.469 [2024-07-12 16:02:55.697000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.697064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.697429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.697495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.697891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.697956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.698309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.698397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.698815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.698882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.699229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.699292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.699676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.699738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.700094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.700161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.700565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.700631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.700995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.701058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 
00:26:26.469 [2024-07-12 16:02:55.701460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.701526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.701914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.701980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.702326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.702401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.702761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.702825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.703150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.703214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.703582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.703650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.704010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.704075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.704444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.704513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.704920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.704985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.705347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.705414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 
00:26:26.469 [2024-07-12 16:02:55.705784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.705849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.706205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.706269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.706607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.706673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.706970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.706996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.707151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.707176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.707366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.707432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.469 [2024-07-12 16:02:55.707728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.469 [2024-07-12 16:02:55.707754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.469 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.707914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.707939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.708097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.708136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.708493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.708569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 
00:26:26.470 [2024-07-12 16:02:55.708927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.708993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.709353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.709418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.709730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.709797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.710186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.710250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.710629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.710694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.711047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.711110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.711423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.711491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.711857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.711922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.712267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.712359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.712685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.712711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 
00:26:26.470 [2024-07-12 16:02:55.712911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.712936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.713124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.713149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.713465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.713532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.713862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.713928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.714287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.714380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.714710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.714775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.715155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.715218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.715524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.715551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.715703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.715728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.715971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.716034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 
00:26:26.470 [2024-07-12 16:02:55.716435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.716500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.716850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.716916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.717253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.717340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.717714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.717781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.718175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.718240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.718719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.718820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.719200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.719272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.470 [2024-07-12 16:02:55.719542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.470 [2024-07-12 16:02:55.719570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.470 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.719711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.719753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.720043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.720110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 
00:26:26.471 [2024-07-12 16:02:55.720502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.720570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.720966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.721031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.721364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.721432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.721788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.721851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.722121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.722146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.722327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.722353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.722511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.722536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.722712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.722738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.723012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.723078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.723489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.723566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 
00:26:26.471 [2024-07-12 16:02:55.723884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.723909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.724065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.724091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.724339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.724408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.724813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.724879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.725240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.725303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.725664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.725729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.726082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.726149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.726541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.726607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.727003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.727069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.727407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.727434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 
00:26:26.471 [2024-07-12 16:02:55.727625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.727651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.728033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.728098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.728457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.728522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.728871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.728935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.729333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.729399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.729804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.729868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.730214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.730278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.471 [2024-07-12 16:02:55.730689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.471 [2024-07-12 16:02:55.730755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.471 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.731119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.731184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.731521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.731588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 
00:26:26.472 [2024-07-12 16:02:55.731987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.732051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.732422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.732489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.732877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.732941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.733286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.733365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.733688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.733714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.733858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.733883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.734195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.734260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.734635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.734700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.735003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.735028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.735193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.735220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 
00:26:26.472 [2024-07-12 16:02:55.735529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.735595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.735915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.735941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.736352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.736433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.736825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.736902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.737288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.737374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.737769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.737833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.738165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.738190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.738373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.738399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.738643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.738669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.738828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.738858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 
00:26:26.472 [2024-07-12 16:02:55.739016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.739042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.739237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.739264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.739444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.739471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.739799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.739867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.740261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.740356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.740696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.740760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.741151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.741215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.472 qpair failed and we were unable to recover it. 00:26:26.472 [2024-07-12 16:02:55.741558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.472 [2024-07-12 16:02:55.741584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.741711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.741737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.741989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.742053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 
00:26:26.473 [2024-07-12 16:02:55.742402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.742468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.742805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.742870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.743270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.743348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.743635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.743676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.743800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.743826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.743984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.744031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.744393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.744459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.744820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.744884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.745201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.745268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.745640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.745707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 
00:26:26.473 [2024-07-12 16:02:55.745976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.746002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.746268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.746346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.746751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.746816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.747170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.747238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.747580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.747606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.747763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.747789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.748002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.748070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.748405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.748472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.748857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.748922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.749218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.749244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 
00:26:26.473 [2024-07-12 16:02:55.749388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.749415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.749699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.749763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.750117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.750181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.750546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.750611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.751021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.751086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.751387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.751413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.751571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.751597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.751862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.473 [2024-07-12 16:02:55.751928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.473 qpair failed and we were unable to recover it. 00:26:26.473 [2024-07-12 16:02:55.752277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.752303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.752678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.752750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 
00:26:26.474 [2024-07-12 16:02:55.753105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.753168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.753544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.753611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.753966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.754031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.754322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.754348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.754504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.754529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.754751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.754816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.755165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.755228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.755661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.755729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.756081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.756145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.756552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.756618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 
00:26:26.474 [2024-07-12 16:02:55.756989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.757053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.757362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.757388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.757549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.757574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.757864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.757932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.758340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.758407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.758722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.758785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.759175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.759241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.759628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.759695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.760082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.760147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.760510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.760575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 
00:26:26.474 [2024-07-12 16:02:55.760914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.474 [2024-07-12 16:02:55.760981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.474 qpair failed and we were unable to recover it. 00:26:26.474 [2024-07-12 16:02:55.761349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.761438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.761794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.761861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.762225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.762285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.762622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.762690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.763083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.763149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.763491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.763558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.763965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.764030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.764390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.764456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.764812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.764877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 
00:26:26.475 [2024-07-12 16:02:55.765264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.765346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.765713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.765778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.766148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.766211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.766585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.766651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.767009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.767073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.767477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.767541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.767922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.767987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.768387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.768455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.768820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.768886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.769236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.769313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 
00:26:26.475 [2024-07-12 16:02:55.769664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.769731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.770056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.770124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.770486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.770552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.770937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.771001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.771346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.771414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.771818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.771882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.772268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.772360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.772676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.772741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.773053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.773119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.773502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.773568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 
00:26:26.475 [2024-07-12 16:02:55.773915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.773983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.475 qpair failed and we were unable to recover it. 00:26:26.475 [2024-07-12 16:02:55.774273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.475 [2024-07-12 16:02:55.774299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.774462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.774488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.774775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.774839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.775220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.775283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.775707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.775774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.776126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.776191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.776553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.776619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.776941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.777009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.777415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.777480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 
00:26:26.476 [2024-07-12 16:02:55.777828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.777893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.778277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.778359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.778667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.778733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.779118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.779185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.779508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.779574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.779931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.779996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.780398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.780464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.780835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.780899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.781223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.781287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.781691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.781755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 
00:26:26.476 [2024-07-12 16:02:55.782114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.782178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.782516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.782582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.782935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.782999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.783345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.783413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.783870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.783971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.784369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.784441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.784784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.784851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.785215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.785283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.785615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.785678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.786030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.786107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 
00:26:26.476 [2024-07-12 16:02:55.786477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.786543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.786912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.786978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.787346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.787413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.787810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.787876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.788264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.476 [2024-07-12 16:02:55.788345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.476 qpair failed and we were unable to recover it. 00:26:26.476 [2024-07-12 16:02:55.788703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.788767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.789100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.789126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.789274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.789299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.789598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.789664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.789996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.790063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 
00:26:26.477 [2024-07-12 16:02:55.790436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.790505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.790905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.790970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.791367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.791433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.791812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.791877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.792264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.792350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.792684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.792748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.793108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.793171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.793552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.793619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.793973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.794041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.794401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.794467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 
00:26:26.477 [2024-07-12 16:02:55.794753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.794779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.794910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.794936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.795071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.795099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.795452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.795518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.795914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.795978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.796314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.796395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.796817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.796882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.797271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.797350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.797712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.797776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.798092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.798156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 
00:26:26.477 [2024-07-12 16:02:55.798528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.798594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.798941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.799006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.799325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.799351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.799530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.799556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.799871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.799936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.800291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.477 [2024-07-12 16:02:55.800369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.477 qpair failed and we were unable to recover it. 00:26:26.477 [2024-07-12 16:02:55.800747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.800811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.801138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.801202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.801541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.801609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.801950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.802026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 
00:26:26.478 [2024-07-12 16:02:55.802416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.802482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.802847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.802911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.803249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.803313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.803687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.803751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.804070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.804133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.804452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.804519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.804842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.804908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.805272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.805352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.805716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.805780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.806063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.806089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 
00:26:26.478 [2024-07-12 16:02:55.806285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.806311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.806681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.806745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.807071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.807097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.807258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.807284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.807472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.807538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.807926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.807990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.808308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.808386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.808738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.808802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.809214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.809279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.809682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.809747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 
00:26:26.478 [2024-07-12 16:02:55.810049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.810116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.810454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.810520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.810906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.810970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.811351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.811417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.811767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.811834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.812203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.812270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.812625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.812693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.813080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.813145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.813470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.478 [2024-07-12 16:02:55.813538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.478 qpair failed and we were unable to recover it. 00:26:26.478 [2024-07-12 16:02:55.813865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.813930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 
00:26:26.479 [2024-07-12 16:02:55.814368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.814435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.814786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.814852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.815239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.815304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.815649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.815715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.816103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.816168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.816490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.816556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.816867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.816934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.817262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.817349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.817705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.817770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.818126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.818201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 
00:26:26.479 [2024-07-12 16:02:55.818534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.818601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.818973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.819037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.819355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.819426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.819760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.819821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.820203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.820267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.820637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.820703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.821096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.821161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.821441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.821468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.821628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.821654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.821952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.822017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 
00:26:26.479 [2024-07-12 16:02:55.822429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.822494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.822842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.822908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.823249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.823344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.823763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.823830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.824181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.824247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.824596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.824663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.825020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.825085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.825440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.825507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.825870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.825935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.826335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.826399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 
00:26:26.479 [2024-07-12 16:02:55.826718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.826782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.827148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.827214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.827633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.827701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.828106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.479 [2024-07-12 16:02:55.828172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.479 qpair failed and we were unable to recover it. 00:26:26.479 [2024-07-12 16:02:55.828532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.828600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.828988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.829053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.829396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.829464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.829818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.829884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.830232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.830300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.830739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.830808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 
00:26:26.480 [2024-07-12 16:02:55.831169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.831234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.831584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.831650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.832037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.832102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.832475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.832541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.832897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.832962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.833330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.833398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.833802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.833868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.834264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.834343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.834745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.834811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.835117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.835194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 
00:26:26.480 [2024-07-12 16:02:55.835530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.835597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.835962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.836027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.836424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.836490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.836844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.836908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.837254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.837332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.837666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.837730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.838087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.838154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.838504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.838573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.838936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.839002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.839368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.839434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 
00:26:26.480 [2024-07-12 16:02:55.839817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.480 [2024-07-12 16:02:55.839882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.480 qpair failed and we were unable to recover it. 00:26:26.480 [2024-07-12 16:02:55.840238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.840264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.840422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.840449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.840762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.840826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.841140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.841206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.841621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.841688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.842089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.842154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.842484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.842551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.842907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.842972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.843338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.843404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 
00:26:26.481 [2024-07-12 16:02:55.843763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.843827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.844160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.844224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.844587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.844651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.845043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.845108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.845469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.845532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.845928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.845993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.846351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.846416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.846780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.846845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.847233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.847296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.847639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.847704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 
00:26:26.481 [2024-07-12 16:02:55.848062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.848126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.848433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.848500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.848855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.848921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.849268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.849348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.849739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.849803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.850189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.850253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.850583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.850650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.851010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.851075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.851395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.851463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.851825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.851900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 
00:26:26.481 [2024-07-12 16:02:55.852292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.852372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.852721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.852786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.853164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.481 [2024-07-12 16:02:55.853229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.481 qpair failed and we were unable to recover it. 00:26:26.481 [2024-07-12 16:02:55.853662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.853731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.854094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.854159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.854554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.854618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.855013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.855077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.855418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.855484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.855830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.855897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.856294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.856389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 
00:26:26.482 [2024-07-12 16:02:55.856743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.856811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.857137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.857198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.857541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.857607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.857932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.857998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.858335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.858400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.858785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.858850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.859220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.859284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.859665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.859729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.860081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.860146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.860535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.860600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 
00:26:26.482 [2024-07-12 16:02:55.860964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.861031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.861403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.861469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.861828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.861892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.862239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.862303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.862742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.862810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.863211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.863275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.863661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.863726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.864102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.864167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.864535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.864600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.864950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.865017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 
00:26:26.482 [2024-07-12 16:02:55.865349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.865439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.865799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.865864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.866219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.866286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.866658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.866726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.867089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.867154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.867553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.867618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.867963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.482 [2024-07-12 16:02:55.868029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.482 qpair failed and we were unable to recover it. 00:26:26.482 [2024-07-12 16:02:55.868396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.868463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.868773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.868837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.869192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.869266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 
00:26:26.483 [2024-07-12 16:02:55.869608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.869674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.870025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.870090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.870429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.870494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.870851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.870916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.871269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.871347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.871728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.871793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.872145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.872210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.872641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.872709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.873039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.873106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.873510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.873575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 
00:26:26.483 [2024-07-12 16:02:55.873901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.873966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.874339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.874406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.874762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.874826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.875206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.875273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.875652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.875718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.876106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.876171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.876568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.876633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.876962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.877027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.877386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.877452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.877774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.877840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 
00:26:26.483 [2024-07-12 16:02:55.878190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.878256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.878626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.878689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.879010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.879077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.879448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.879513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.879919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.879983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.880348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.880414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.880824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.880891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.881255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.881347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.881718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.881782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.882167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.882232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 
00:26:26.483 [2024-07-12 16:02:55.882620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.882686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.483 [2024-07-12 16:02:55.883007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.483 [2024-07-12 16:02:55.883074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.483 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.883476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.883541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.883912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.883976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.884345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.884411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.884781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.884849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.885210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.885276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.885640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.885704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.886049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.886113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.886520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.886597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 
00:26:26.484 [2024-07-12 16:02:55.886972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.887037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.887393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.887458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.887812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.887878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.888241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.888305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.888719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.888787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.889189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.889254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.889638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.889706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.890099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.890164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.890521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.890587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.890946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.891011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 
00:26:26.484 [2024-07-12 16:02:55.891414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.891482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.891885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.891948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.892299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.892379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.892712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.892777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.893165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.893230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.893560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.893627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.894018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.894082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.894419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.894484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.894882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.894946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.895303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.895381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 
00:26:26.484 [2024-07-12 16:02:55.895777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.895843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.896174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.896241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.896590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.484 [2024-07-12 16:02:55.896654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.484 qpair failed and we were unable to recover it. 00:26:26.484 [2024-07-12 16:02:55.897017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.897082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.897407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.897473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.897855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.897921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.898285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.898366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.898716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.898782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.899154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.899217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.899597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.899663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 
00:26:26.485 [2024-07-12 16:02:55.900047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.900112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.900455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.900521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.900837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.900904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.901246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.901313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.901690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.901758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.902087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.902152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.902549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.902614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.902971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.903036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.903397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.903462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.903788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.903852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 
00:26:26.485 [2024-07-12 16:02:55.904174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.904239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.904621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.904686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.905045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.905110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.905476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.905541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.905909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.905974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.906329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.906395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.906707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.906774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.907163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.907228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.907638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.907705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.908068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.908133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 
00:26:26.485 [2024-07-12 16:02:55.908500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.908565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.908908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.908970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.909310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.909388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.909728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.909797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.910170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.910235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.910636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.910702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.911057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.911121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.911436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.911501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.911898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.911963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.485 qpair failed and we were unable to recover it. 00:26:26.485 [2024-07-12 16:02:55.912349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.485 [2024-07-12 16:02:55.912413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 
00:26:26.486 [2024-07-12 16:02:55.912776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.912841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.913193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.913261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.913616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.913682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.914049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.914115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.914444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.914513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.914855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.914920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.915266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.915355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.915757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.915822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.916181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.916242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.916610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.916678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 
00:26:26.486 [2024-07-12 16:02:55.917073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.917137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.917498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.917562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.917955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.918019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.918370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.918437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.918841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.918905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.919264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.919353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.919718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.919781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.920145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.920209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.920574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.920641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.921023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.921087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 
00:26:26.486 [2024-07-12 16:02:55.921419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.921485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.921852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.921916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.922263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.922345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.922736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.922801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.923119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.923185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.923547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.923613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.923974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.924041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.924434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.924499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.924855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.924919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.925266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.925342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 
00:26:26.486 [2024-07-12 16:02:55.925712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.925775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.926122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.926188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.926549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.926615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.927002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.927066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.927391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.927459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.927794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.927860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.928237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.928301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.928688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.928754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.929071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.929140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.929473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.929539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 
00:26:26.486 [2024-07-12 16:02:55.929927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.929991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.930292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.930374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.930693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.930756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.931149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.931214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.931605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.931670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.931996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.932060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.932405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.932480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.486 [2024-07-12 16:02:55.932880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.486 [2024-07-12 16:02:55.932945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.486 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.933250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.933328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.933688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.933753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 
00:26:26.487 [2024-07-12 16:02:55.934106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.934170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.934534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.934602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.934924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.934991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.935394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.935460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.935850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.935915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.936249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.936343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.936735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.936810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.937189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.937259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.937654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.937722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.938119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.938183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 
00:26:26.487 [2024-07-12 16:02:55.938544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.938611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.938992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.939057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.939412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.939478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.939847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.939913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.940279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.940357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.940673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.940738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.941099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.941164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.941479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.941544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.941930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.941995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.942410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.942476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 
00:26:26.487 [2024-07-12 16:02:55.942827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.942891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.943248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.943311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.943661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.943725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.944133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.944197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.944526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.944591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.944942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.945008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.945403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.945467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.945844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.945909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.946295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.946373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.946699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.946767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 
00:26:26.487 [2024-07-12 16:02:55.947087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.947154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.947489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.947557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.947932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.947997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.948334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.948400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.948751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.948814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.949131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.949198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.949572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.949648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.950004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.950068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.950398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.950466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.950786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.950850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 
00:26:26.487 [2024-07-12 16:02:55.951233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.951297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.951670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.951737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.952060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.952127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.952529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.952595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.952996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.953060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.953449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.953514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.487 [2024-07-12 16:02:55.953871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.487 [2024-07-12 16:02:55.953933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.487 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.954330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.954397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.954817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.954884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.955247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.955311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 
00:26:26.488 [2024-07-12 16:02:55.955691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.955759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.956104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.956169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.956520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.956587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.956973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.957038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.957400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.957465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.957825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.957889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.958286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.958376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.958777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.958842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.959204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.959268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 00:26:26.488 [2024-07-12 16:02:55.959704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.488 [2024-07-12 16:02:55.959773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.488 qpair failed and we were unable to recover it. 
00:26:26.492 [2024-07-12 16:02:56.041789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.492 [2024-07-12 16:02:56.041856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.492 qpair failed and we were unable to recover it. 00:26:26.492 [2024-07-12 16:02:56.042208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.492 [2024-07-12 16:02:56.042273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.492 qpair failed and we were unable to recover it. 00:26:26.492 [2024-07-12 16:02:56.042643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.492 [2024-07-12 16:02:56.042709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.043067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.043133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.043489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.043556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.043932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.043996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.044405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.044472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.044833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.044897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.045265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.045351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.045720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.045784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 
00:26:26.493 [2024-07-12 16:02:56.046104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.046172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.046532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.046598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.046944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.047008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.047436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.047505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.047858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.047922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.048300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.048379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.048743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.048808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.049119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.049186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.049578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.049644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.050004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.050069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 
00:26:26.493 [2024-07-12 16:02:56.050466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.050532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.050930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.050995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.051378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.051444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.051843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.051909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.052292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.052373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.052795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.052863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.053252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.053356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.053738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.053804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.054155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.054220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.054597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.054666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 
00:26:26.493 [2024-07-12 16:02:56.055022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.055090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.055451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.055516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.055926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.055991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.056388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.056455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.056817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.056884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.057278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.057360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.057728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.057791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.058176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.493 [2024-07-12 16:02:56.058242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.493 qpair failed and we were unable to recover it. 00:26:26.493 [2024-07-12 16:02:56.058614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.058681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.059068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.059132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 
00:26:26.494 [2024-07-12 16:02:56.059540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.059607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.059968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.060035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.060365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.060430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.060788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.060853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.061166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.061233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.061648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.061716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.062072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.062140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.062506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.062572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.062916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.062981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.063366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.063432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 
00:26:26.494 [2024-07-12 16:02:56.063787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.063853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.064203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.064267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.064684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.064751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.065152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.065218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.065652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.065721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.066082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.066147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.066472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.066537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.066870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.066937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.067344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.067411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.067809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.067875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 
00:26:26.494 [2024-07-12 16:02:56.068261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.068339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.068657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.068724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.069125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.069190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.069521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.069590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.069959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.070024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.070356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.070423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.070790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.070865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.071213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.071278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.494 [2024-07-12 16:02:56.071655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.494 [2024-07-12 16:02:56.071719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.494 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.072112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.072176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 
00:26:26.495 [2024-07-12 16:02:56.072575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.072642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.072998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.073062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.073418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.073505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.073876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.073941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.074352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.074418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.074720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.074785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.075138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.075202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.075543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.075609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.075953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.076017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.076417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.076482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 
00:26:26.495 [2024-07-12 16:02:56.076809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.076877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.077195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.077263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.077654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.077720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.078122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.078187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.078544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.078610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.078972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.079037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.079417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.079484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.079854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.079919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.080287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.080368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.080788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.080855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 
00:26:26.495 [2024-07-12 16:02:56.081251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.081343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.081711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.081776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.082173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.082237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.082633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.082699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.083067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.083136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.083480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.083546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.083938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.084003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.084394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.084459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.084804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.084869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.085256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.085334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 
00:26:26.495 [2024-07-12 16:02:56.085695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.085762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.086148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.086213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.086575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.086643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.087052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.087117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.087476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.087542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.087871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.087937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.088290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.088380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.088747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.088813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.089163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.495 [2024-07-12 16:02:56.089227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.495 qpair failed and we were unable to recover it. 00:26:26.495 [2024-07-12 16:02:56.089608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.089676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 
00:26:26.496 [2024-07-12 16:02:56.090037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.090102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.090501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.090567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.090913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.090978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.091362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.091427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.091767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.091832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.092229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.092294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.092666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.092734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.093125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.093189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.093598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.093665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.094060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.094125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 
00:26:26.496 [2024-07-12 16:02:56.094527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.094593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.094945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.095009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.095407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.095471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.095858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.095922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.096284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.096383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.096745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.096814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.097148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.097213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.097619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.097685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.098012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.098076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.098465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.098531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 
00:26:26.496 [2024-07-12 16:02:56.098927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.098992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.099406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.099471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.099835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.099899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.100272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.100353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.100696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.100763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.101127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.101192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.101580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.101647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.102005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.102069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.102473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.102538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.102943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.103006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 
00:26:26.496 [2024-07-12 16:02:56.103374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.103439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.103789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.103853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.104166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.104232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.104621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.104688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.105060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.105124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.105479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.105545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.105890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.105964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.106367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.106432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.106754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.106821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.496 qpair failed and we were unable to recover it. 00:26:26.496 [2024-07-12 16:02:56.107209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.496 [2024-07-12 16:02:56.107272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 
00:26:26.497 [2024-07-12 16:02:56.107644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.107710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.108040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.108103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.108458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.108523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.108864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.108928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.109277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.109356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.109718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.109782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.110132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.110197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.110560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.110627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.110952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.111017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.111389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.111455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 
00:26:26.497 [2024-07-12 16:02:56.111812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.111876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.112262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.112352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.112718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.112782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.113169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.113234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.113590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.113655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.114006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.114073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.114437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.114504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.114864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.114927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.115267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.115344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 00:26:26.497 [2024-07-12 16:02:56.115702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.497 [2024-07-12 16:02:56.115766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.497 qpair failed and we were unable to recover it. 
00:26:26.779 [2024-07-12 16:02:56.116157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.116222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.116626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.116692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.117074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.117139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.117498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.117566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.117954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.118020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.118425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.118492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.118846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.118910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.119293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.119382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.119728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.119793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.120145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.120209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 
00:26:26.779 [2024-07-12 16:02:56.120579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.120645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.121000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.121065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.121422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.121486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.121830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.121894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.122278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.122359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.122721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.122786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.123127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.123193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.123533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.123599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.123965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.124030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.124356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.124421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 
00:26:26.779 [2024-07-12 16:02:56.124781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.124846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.125208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.125274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.125676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.125744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.126147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.126212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.126691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.126792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.127184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.127255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.127636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.127703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.779 qpair failed and we were unable to recover it. 00:26:26.779 [2024-07-12 16:02:56.128033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.779 [2024-07-12 16:02:56.128099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.128450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.128517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.128914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.128980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 
00:26:26.780 [2024-07-12 16:02:56.129391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.129458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.129807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.129872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.130223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.130288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.130674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.130738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.131123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.131187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.131550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.131628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.132023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.132086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.132489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.132554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.132929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.132996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.133345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.133420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 
00:26:26.780 [2024-07-12 16:02:56.133766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.133832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.134196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.134264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.134644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.134711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.135102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.135184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.135568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.135634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.136033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.136098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.136533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.136618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.137000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.137083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.137454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.137522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.137884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.137949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 
00:26:26.780 [2024-07-12 16:02:56.138345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.138409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.138744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.138810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.139202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.139268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.139603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.139671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.140028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.140092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.140478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.140545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.140856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.140922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.141299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.141380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.141748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.141812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.142187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.142252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 
00:26:26.780 [2024-07-12 16:02:56.142556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.142590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.142793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.142827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.143027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.143060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.143265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.143299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.143511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.143545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.143744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.143778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.780 qpair failed and we were unable to recover it. 00:26:26.780 [2024-07-12 16:02:56.143978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.780 [2024-07-12 16:02:56.144012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.144202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.144237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.144442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.144478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.144676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.144709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 
00:26:26.781 [2024-07-12 16:02:56.144878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.144912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.145105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.145139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.145383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.145416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.145583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.145615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.145792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.145823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.146013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.146045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.146262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.146295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.146499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.146531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.146695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.146728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.146916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.146947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 
00:26:26.781 [2024-07-12 16:02:56.147141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.147173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.147376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.147410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.147583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.147615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.147784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.147821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.148019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.148051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.148243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.148281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.148486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.148519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.148844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.148908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.149255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.149341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.149526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.149558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 
00:26:26.781 [2024-07-12 16:02:56.149715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.149749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.150129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.150161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.150481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.150513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.150847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.150910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.151310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.151397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.151558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.151607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.151811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.151852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.152082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.152133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.152355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.152405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.152611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.152648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 
00:26:26.781 [2024-07-12 16:02:56.152831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.152868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.153110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.153146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.153359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.153410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.153596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.153646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.153824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.781 [2024-07-12 16:02:56.153861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.781 qpair failed and we were unable to recover it. 00:26:26.781 [2024-07-12 16:02:56.154093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.154131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.154397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.154430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.154651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.154688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.154894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.154931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.155142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.155179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 
00:26:26.782 [2024-07-12 16:02:56.155416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.155450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.155622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.155654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.155906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.155943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.156186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.156224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.156420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.156454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.156619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.156654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.156851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.156884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.157119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.157156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.157383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.157418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.157590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.157643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 
00:26:26.782 [2024-07-12 16:02:56.157828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.157865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.158052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.158089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.158308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.158373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.158563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.158621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.158816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.158854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.159072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.159108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.159293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.159347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.159538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.159572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.159817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.159850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.160053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.160090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 
00:26:26.782 [2024-07-12 16:02:56.160363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.160396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.160612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.160655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.160867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.160904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.161113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.161151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.161438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.161472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.161714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.161751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.161968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.162004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.162252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.162290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.162525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.162557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.162788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.162825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 
00:26:26.782 [2024-07-12 16:02:56.163063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.163100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.163310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.163377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.163607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.163644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.782 qpair failed and we were unable to recover it. 00:26:26.782 [2024-07-12 16:02:56.163863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.782 [2024-07-12 16:02:56.163900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.164110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.164146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.164365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.164398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.164594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.164626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.164851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.164889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.165101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.165138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.165353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.165391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 
00:26:26.783 [2024-07-12 16:02:56.165607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.165656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.165903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.165935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.166111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.166149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.166367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.166405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.166602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.166639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.166857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.166894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.167135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.167172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.167358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.167397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.167645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.167683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.167864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.167901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 
00:26:26.783 [2024-07-12 16:02:56.168084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.168123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.168341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.168380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.168569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.168607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.168841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.168884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.169070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.169108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.169330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.169368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.169604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.169641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.169880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.169917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.170119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.783 [2024-07-12 16:02:56.170155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.783 qpair failed and we were unable to recover it. 00:26:26.783 [2024-07-12 16:02:56.170362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.170401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 
00:26:26.784 [2024-07-12 16:02:56.170616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.170654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.170857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.170893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.171104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.171140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.171353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.171391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.171587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.171623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.171797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.171834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.172053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.172092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.172347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.172385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.172594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.172631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.172878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.172915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 
00:26:26.784 [2024-07-12 16:02:56.173125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.173163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.173374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.173412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.173590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.173633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.173872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.173910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.174098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.174132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.174305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.174346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.174551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.174590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.174795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.174832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.175078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.175115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.175304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.175350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 
00:26:26.784 [2024-07-12 16:02:56.175546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.175583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.175767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.175804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.176013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.176045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.176224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.176262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.176528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.176566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.176754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.176791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.177027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.177064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.177311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.177382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.177604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.177649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.177866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.177899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 
00:26:26.784 [2024-07-12 16:02:56.178116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.178167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.178401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.178440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.178681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.178718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.178939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.178982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.179194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.179230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.179421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.179458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.784 [2024-07-12 16:02:56.179679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.784 [2024-07-12 16:02:56.179718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.784 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.179954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.179991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.180206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.180243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.180446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.180485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 
00:26:26.785 [2024-07-12 16:02:56.180708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.180746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.180972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.181009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.181261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.181298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.181533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.181571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.181755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.181791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.182037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.182074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.182257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.182294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.182543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.182581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.182787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.182825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.183044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.183081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 
00:26:26.785 [2024-07-12 16:02:56.183302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.183351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.183565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.183603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.183843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.183880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.184119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.184156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.184380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.184414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.184601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.184633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.184856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.184892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.185136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.185173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.185419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.185457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.185700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.185736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 
00:26:26.785 [2024-07-12 16:02:56.185965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.186002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.186244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.186275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.186501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.186538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.186762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.186799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.187008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.187043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.187251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.187285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.187543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.187601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.187807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.187848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.188092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.188131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.188305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.188353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 
00:26:26.785 [2024-07-12 16:02:56.188603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.188641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.188885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.188923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.189142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.189180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.785 qpair failed and we were unable to recover it. 00:26:26.785 [2024-07-12 16:02:56.189390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.785 [2024-07-12 16:02:56.189436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.189631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.189665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.189918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.189957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.190176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.190214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.190391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.190429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.190679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.190713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.190902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.190936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 
00:26:26.786 [2024-07-12 16:02:56.191157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.191191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.191380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.191416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.191616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.191651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.191881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.191916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.192149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.192183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.192410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.192445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.192648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.192681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.192856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.192889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.193075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.193108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.193291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.193335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 
00:26:26.786 [2024-07-12 16:02:56.193545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.193591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.193842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.193885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.194189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.194249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.194476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.194519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.194758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.194800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.195068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.195126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.195325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.195368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.195587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.195641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.195895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.195953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.196187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.196245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 
00:26:26.786 [2024-07-12 16:02:56.196506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.196565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.196840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.196898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.197144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.197204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.197429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.197493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.197805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.197866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.198134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.198186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.198468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.198531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.198778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.198840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.199055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.199116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.199374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.199418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 
00:26:26.786 [2024-07-12 16:02:56.199660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.199717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.199930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.199988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.786 [2024-07-12 16:02:56.200199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.786 [2024-07-12 16:02:56.200241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.786 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.200492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.200558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.200799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.200855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.201087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.201145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.201380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.201440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.201665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.201723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.201931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.201974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.202217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.202267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 
00:26:26.787 [2024-07-12 16:02:56.202500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.202558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.202780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.202837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.203046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.203087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.203284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.203336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.203561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.203602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.203815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.203857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.204082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.204124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.204350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.204394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.204610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.204653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.204896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.204939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 
00:26:26.787 [2024-07-12 16:02:56.205157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.205199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.205413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.205456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.205666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.205707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.205928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.205969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.206201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.206240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.206445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.206489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.206685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.206726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.206910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.206950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.207157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.207197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.207451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.207494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 
00:26:26.787 [2024-07-12 16:02:56.207698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.207745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.207962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.207994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.208181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.208211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.208410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.208443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.208626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.208656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.208867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.208897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.209063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.209093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.209277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.209307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.209505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.209535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.209720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.209752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 
00:26:26.787 [2024-07-12 16:02:56.209925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.209956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.210112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.210143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.210327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.210358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.787 qpair failed and we were unable to recover it. 00:26:26.787 [2024-07-12 16:02:56.210549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.787 [2024-07-12 16:02:56.210578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.210733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.210763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.210914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.210945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.211146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.211175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.211329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.211359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.211554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.211583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.211758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.211787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 
00:26:26.788 [2024-07-12 16:02:56.211933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.211961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.212138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.212170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.212326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.212356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.212532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.212561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.212745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.212773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.212972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.213001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.213147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.213177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.213337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.213368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.213565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.213594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.213773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.213802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 
00:26:26.788 [2024-07-12 16:02:56.213954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.213983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.214181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.214210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.214426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.214454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.214646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.214674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.214835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.214863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.215010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.215038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.215210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.215239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.215419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.215448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.215615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.215643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.215843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.215871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 
00:26:26.788 [2024-07-12 16:02:56.216007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.216035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.216213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.216241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.216417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.216444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.216588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.216616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.216806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.216833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.216994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.217021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.217159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.217186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.217353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.217382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.217537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.788 [2024-07-12 16:02:56.217564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.788 qpair failed and we were unable to recover it. 00:26:26.788 [2024-07-12 16:02:56.217699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.217727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 
00:26:26.789 [2024-07-12 16:02:56.217888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.217916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.218050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.218077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.218243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.218270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.218462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.218489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.218650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.218682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.218872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.218899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.219040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.219067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.219230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.219257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.219385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.219413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.219600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.219627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 
00:26:26.789 [2024-07-12 16:02:56.219791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.219818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.219985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.220011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.220201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.220227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.220399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.220425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.220553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.220579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.220763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.220789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.220948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.220974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.221157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.221183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.221372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.221399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.221560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.221586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 
00:26:26.789 [2024-07-12 16:02:56.221752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.221777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.221931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.221959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.222095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.222121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.222283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.222309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.222484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.222511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.222648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.222674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.222857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.222883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.223029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.223056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.223217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.223242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.223401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.223428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 
00:26:26.789 [2024-07-12 16:02:56.223559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.223585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.223753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.223783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.223945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.223970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.224125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.224150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.224305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.224337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.224490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.224515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.224674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.224699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.224933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.224959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.225080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.225105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 00:26:26.789 [2024-07-12 16:02:56.225237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.789 [2024-07-12 16:02:56.225262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.789 qpair failed and we were unable to recover it. 
00:26:26.789 [2024-07-12 16:02:56.225449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.225475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.225627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.225652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.225786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.225811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.225939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.225964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.226145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.226169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.226352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.226378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.226539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.226564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.226718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.226742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.226901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.226925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.227055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.227081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 
00:26:26.790 [2024-07-12 16:02:56.227260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.227284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.227455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.227481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.227637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.227662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.227819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.227844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.228000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.228025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.228151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.228176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.228332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.228358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.228503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.228539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.228796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.228824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.228961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.228986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 
00:26:26.790 [2024-07-12 16:02:56.229165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.229190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.229347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.229373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.229534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.229559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.229715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.229742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.229898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.229923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.230053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.230078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.230215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.230241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.230364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.230390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.230621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.230646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.230773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.230798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 
00:26:26.790 [2024-07-12 16:02:56.230955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.230981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.231111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.231137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.231295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.231326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.231483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.231508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.231683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.231708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.231860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.231885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.232040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.232065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.232195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.232220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.232397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.232423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.232554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.232579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 
00:26:26.790 [2024-07-12 16:02:56.232735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.790 [2024-07-12 16:02:56.232760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.790 qpair failed and we were unable to recover it. 00:26:26.790 [2024-07-12 16:02:56.232913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.232938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.233090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.233115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.233269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.233294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.233464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.233490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.233622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.233647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.233825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.233851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.234009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.234034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.234187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.234212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.234360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.234386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 
00:26:26.791 [2024-07-12 16:02:56.234541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.234566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.234737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.234761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.234909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.234934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.235088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.235115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.235268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.235294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.235435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.235460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.235601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.235627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.235749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.235776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.235930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.235956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.236111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.236140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 
00:26:26.791 [2024-07-12 16:02:56.236273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.236299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.236477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.236502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.236674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.236699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.236881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.236906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.237062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.237087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.237224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.237250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.237397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.237422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.237574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.237599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.237753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.237778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.237931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.237956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 
00:26:26.791 [2024-07-12 16:02:56.238110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.238136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.238269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.238294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.238454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.238496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.238670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.238709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.238851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.238879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.239014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.239040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.239196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.239222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.239388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.239415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.239548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.239574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.239753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.239778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 
00:26:26.791 [2024-07-12 16:02:56.239912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.239937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.240056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.791 [2024-07-12 16:02:56.240081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.791 qpair failed and we were unable to recover it. 00:26:26.791 [2024-07-12 16:02:56.240213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.240238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.240391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.240417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.240543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.240569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.240691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.240716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.240873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.240902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.241061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.241086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.241263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.241289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.241445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.241471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 
00:26:26.792 [2024-07-12 16:02:56.241597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.241622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.241811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.241836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.241985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.242010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.242139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.242164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.242289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.242313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.242482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.242507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.242642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.242668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.242825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.242850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.242992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.243017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.243174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.243203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 
00:26:26.792 [2024-07-12 16:02:56.243380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.243408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.243564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.243590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.243742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.243769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.243927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.243953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.244104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.244129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.244258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.244285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.244457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.244482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.244662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.244687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.244838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.244863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.245040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.245065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 
00:26:26.792 [2024-07-12 16:02:56.245214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.245239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.245393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.245421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.245574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.245601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.245759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.245790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.245970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.245997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.246141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.792 [2024-07-12 16:02:56.246167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.792 qpair failed and we were unable to recover it. 00:26:26.792 [2024-07-12 16:02:56.246297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.246328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.246464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.246491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.246645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.246671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.246824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.246850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 
00:26:26.793 [2024-07-12 16:02:56.246977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.247003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.247174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.247201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.247358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.247385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.247542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.247569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.247747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.247772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.247954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.247979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.248156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.248181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.248344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.248369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.248523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.248549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.248701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.248726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 
00:26:26.793 [2024-07-12 16:02:56.248879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.248904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.249056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.249081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.249228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.249253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.249408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.249434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.249613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.249638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.249792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.249817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.249947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.249972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.250126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.250151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.250309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.250338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.250471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.250497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 
00:26:26.793 [2024-07-12 16:02:56.250630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.250662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.250814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.250839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.250994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.251019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.251157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.251182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.251354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.251381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.251521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.251547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.251701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.251728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.251859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.251886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.252049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.252075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.252248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.252288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 
00:26:26.793 [2024-07-12 16:02:56.252446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.252485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.252644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.252670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.252829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.252855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.253000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.253025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.793 [2024-07-12 16:02:56.253206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.793 [2024-07-12 16:02:56.253232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.793 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.253362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.253389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.253548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.253574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.253753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.253779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.253935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.253960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.254117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.254143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 
00:26:26.794 [2024-07-12 16:02:56.254310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.254341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.254470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.254495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.254651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.254676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.254831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.254857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.255028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.255053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.255203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.255228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.255357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.255383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.255539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.255569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.255736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.255761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.255918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.255944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 
00:26:26.794 [2024-07-12 16:02:56.256100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.256125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.256251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.256278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.256417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.256443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.256598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.256624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.256806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.256832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.256986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.257012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.257160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.257185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.257331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.257371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.257526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.257553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.257707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.257733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 
00:26:26.794 [2024-07-12 16:02:56.257893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.257919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.258059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.258084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.258324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.258350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.258505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.258530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.258661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.258686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.258862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.258887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.259021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.259046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.259197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.259224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.259356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.259382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.259541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.259566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 
00:26:26.794 [2024-07-12 16:02:56.259721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.259747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.259900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.259926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.260080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.260106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.260239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.794 [2024-07-12 16:02:56.260265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.794 qpair failed and we were unable to recover it. 00:26:26.794 [2024-07-12 16:02:56.260441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.260480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.260617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.260645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.260800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.260826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.260977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.261004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.261132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.261158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.261308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.261340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 
00:26:26.795 [2024-07-12 16:02:56.261470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.261496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.261624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.261652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.261781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.261807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.261939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.261965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.262116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.262142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.262297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.262329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.262486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.262512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.262640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.262671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.262855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.262881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.263010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.263038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 
00:26:26.795 [2024-07-12 16:02:56.263166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.263190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.263308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.263339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.263496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.263521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.263679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.263705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.263838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.263863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.263996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.264021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.264178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.264203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.264379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.264405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.264550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.264575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.264744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.264769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 
00:26:26.795 [2024-07-12 16:02:56.264896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.264922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.265065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.265091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.265246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.265273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.265437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.265463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.265610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.265635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.265793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.265818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.265974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.265999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.266158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.266184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.266313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.266344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.266508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.266534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 
00:26:26.795 [2024-07-12 16:02:56.266657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.266683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.266842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.266869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.267048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.267074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.795 qpair failed and we were unable to recover it. 00:26:26.795 [2024-07-12 16:02:56.267219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.795 [2024-07-12 16:02:56.267245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.267410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.267449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.267585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.267612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.267743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.267768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.267900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.267926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.268054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.268080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.268209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.268234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 
00:26:26.796 [2024-07-12 16:02:56.268373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.268398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.268554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.268580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.268736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.268762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.268915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.268940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.269071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.269096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.269248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.269274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.269420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.269447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.269579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.269605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.269741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.269767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.269927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.269952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 
00:26:26.796 [2024-07-12 16:02:56.270106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.270133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.270273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.270312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.270465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.270494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.270632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.270658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.270806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.270831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.270990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.271016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.271146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.271171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.271320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.271346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.271495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.271521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.271675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.271701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 
00:26:26.796 [2024-07-12 16:02:56.271850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.271876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.272012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.272038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.272168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.272195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.272349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.272375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.272506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.272532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.272658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.272684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.272835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.272861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.272991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.273016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.273150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.273176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.273302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.273332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 
00:26:26.796 [2024-07-12 16:02:56.273489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.273514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.273637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.273663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.273816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.273841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.273968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.796 [2024-07-12 16:02:56.273995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.796 qpair failed and we were unable to recover it. 00:26:26.796 [2024-07-12 16:02:56.274153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.274184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.274359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.274385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.274518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.274544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.274721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.274747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.274899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.274924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.275059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.275085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 
00:26:26.797 [2024-07-12 16:02:56.275264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.275289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.275440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.275479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.275735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.275774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.275938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.275965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.276102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.276129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.276280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.276305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.276474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.276500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.276626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.276652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.276782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.276808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.276962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.276988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 
00:26:26.797 [2024-07-12 16:02:56.277136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.277162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.277308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.277340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.277470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.277495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.277629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.277654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.277805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.277830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.277959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.277985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.278124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.278152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.278302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.278335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.278467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.278492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.278657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.278682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 
00:26:26.797 [2024-07-12 16:02:56.278834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.278859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.279018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.279044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.279212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.279238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.279394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.279420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.279574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.279600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.279755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.279781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.279913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.279939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.280069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.280095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.280232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.797 [2024-07-12 16:02:56.280258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.797 qpair failed and we were unable to recover it. 00:26:26.797 [2024-07-12 16:02:56.280410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.280437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 
00:26:26.798 [2024-07-12 16:02:56.280570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.280596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.280729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.280756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.280910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.280936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.281090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.281117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.281258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.281302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.281491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.281519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.281665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.281691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.281818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.281845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.281977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.282003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.282130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.282155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 
00:26:26.798 [2024-07-12 16:02:56.282283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.282309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.282472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.282498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.282620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.282646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.282800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.282825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.282983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.283008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.283141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.283167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.283298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.283329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.283457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.283482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.283665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.283691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.283847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.283872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 
00:26:26.798 [2024-07-12 16:02:56.284005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.284030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.284212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.284237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.284370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.284396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.284551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.284576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.284702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.284728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.284874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.284899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.285056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.285081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.285206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.285231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.285382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.285422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.285556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.285583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 
00:26:26.798 [2024-07-12 16:02:56.285739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.285765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.285896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.285922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.286078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.286103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.286252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.286277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.286409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.286435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.286563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.286588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.286758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.286783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.287020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.287045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.798 [2024-07-12 16:02:56.287228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.798 [2024-07-12 16:02:56.287253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.798 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.287386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.287412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 
00:26:26.799 [2024-07-12 16:02:56.287545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.287570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.287723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.287748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.287873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.287899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.288070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.288095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.288248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.288278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.288464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.288490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.288621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.288646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.288801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.288826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.288973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.288998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.289137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.289163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 
00:26:26.799 [2024-07-12 16:02:56.289293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.289325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.289458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.289484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.289641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.289667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.289820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.289845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.289991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.290016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.290149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.290174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.290304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.290338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.290493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.290518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.290704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.290730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.290884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.290909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 
00:26:26.799 [2024-07-12 16:02:56.291060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.291086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.291216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.291241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.291374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.291400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.291553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.291578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.291737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.291763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.291918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.291943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.292094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.292119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.292245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.292271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.292418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.292443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.292573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.292598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 
00:26:26.799 [2024-07-12 16:02:56.292756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.292780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.292919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.292945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.293100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.293126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.293255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.293281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.293441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.293466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.293608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.293634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.799 [2024-07-12 16:02:56.293773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.799 [2024-07-12 16:02:56.293798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.799 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.293976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.294002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.294158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.294184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.294313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.294344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 
00:26:26.800 [2024-07-12 16:02:56.294479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.294506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.294686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.294711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.294861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.294886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.295011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.295036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.295165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.295190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.295345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.295371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.295498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.295525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.295655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.295680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.295808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.295833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.295955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.295980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 
00:26:26.800 [2024-07-12 16:02:56.296108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.296133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.296320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.296346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.296475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.296500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.296638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.296663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.296814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.296840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.296996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.297020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.297189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.297214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.297395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.297421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.297555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.297582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.297738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.297765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 
00:26:26.800 [2024-07-12 16:02:56.297893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.297919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.298074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.298099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.298253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.298278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.298456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.298483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.298641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.298667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.298848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.298873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.299021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.299046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.299212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.299237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.299365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.299391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.299543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.299569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 
00:26:26.800 [2024-07-12 16:02:56.299733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.299758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.299913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.299943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.300092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.300117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.800 [2024-07-12 16:02:56.300271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.800 [2024-07-12 16:02:56.300296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.800 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.300463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.300490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.300623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.300649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.300796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.300821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.300972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.300998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.301156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.301181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.301357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.301383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 
00:26:26.801 [2024-07-12 16:02:56.301533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.301558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.301733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.301758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.301906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.301932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.302090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.302115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.302268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.302292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.302437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.302463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.302643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.302668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.302795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.302822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.302953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.302978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.303131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.303157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 
00:26:26.801 [2024-07-12 16:02:56.303339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.303365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.303493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.303519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.303651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.303676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.303834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.303859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.303986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.304012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.304187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.304212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.304347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.304373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.304551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.304576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.304712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.304737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.304871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.304897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 
00:26:26.801 [2024-07-12 16:02:56.305069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.305095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.305268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.305293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.305422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.305448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.305599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.305625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.305772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.305797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.305928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.305953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.306109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.306134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.306285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.306310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.306472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.801 [2024-07-12 16:02:56.306499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.801 qpair failed and we were unable to recover it. 00:26:26.801 [2024-07-12 16:02:56.306628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.306654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 
00:26:26.802 [2024-07-12 16:02:56.306777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.306803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.306962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.306992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.307149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.307174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.307302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.307335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.307464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.307490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.307616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.307642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.307768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.307794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.307953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.308105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.308130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.308252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.308277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 
00:26:26.802 [2024-07-12 16:02:56.308409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.308435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.308557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.308583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.308739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.308765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.308885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.308910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.309033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.309058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.309216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.309241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.309389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.309415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.309546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.309572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.309714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.309740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.309869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.309895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 
00:26:26.802 [2024-07-12 16:02:56.310027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.310052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.310205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.310229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.310417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.310442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.310567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.310594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.310750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.310775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.310932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.310958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.311117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.311141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.311296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.311326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.311468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.311494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.311649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.311673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 
00:26:26.802 [2024-07-12 16:02:56.311805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.311830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.311986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.312011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.312147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.312172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.312299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.312335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.312484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.312509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.312667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.312692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.312818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.312843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.802 [2024-07-12 16:02:56.313020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.802 [2024-07-12 16:02:56.313045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.802 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.313175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.313200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.313354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.313380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 
00:26:26.803 [2024-07-12 16:02:56.313558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.313583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.313711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.313741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.313878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.313903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.314057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.314082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.314238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.314263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.314416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.314441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.314585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.314610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.314734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.314760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.314938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.314963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.315130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.315155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 
00:26:26.803 [2024-07-12 16:02:56.315332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.315358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.315507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.315532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.315716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.315741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.315918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.315943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.316075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.316100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.316235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.316261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.316394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.316421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.316594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.316620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.316756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.316781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.316940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.316965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 
00:26:26.803 [2024-07-12 16:02:56.317120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.317145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.317299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.317329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.317479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.317504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.317657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.317682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.317835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.317860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.318026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.318051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.318205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.318230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.318382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.318408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.318544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.318570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.318735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.318761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 
00:26:26.803 [2024-07-12 16:02:56.318893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.318919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.319081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.319106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.319261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.319286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.319420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.319445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.319600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.319625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.319797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.319822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.319949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.803 [2024-07-12 16:02:56.319974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.803 qpair failed and we were unable to recover it. 00:26:26.803 [2024-07-12 16:02:56.320100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.320126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.320268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.320293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.320432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.320458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 
00:26:26.804 [2024-07-12 16:02:56.320638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.320664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.320817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.320846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.320980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.321005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.321173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.321198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.321337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.321363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.321543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.321569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.321695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.321720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.321881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.321906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.322055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.322080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.322232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.322257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 
00:26:26.804 [2024-07-12 16:02:56.322390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.322415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.322568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.322593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.322755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.322781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.322933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.322958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.323110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.323135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.323265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.323291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.323454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.323480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.323639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.323666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.323821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.323846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.324026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.324051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 
00:26:26.804 [2024-07-12 16:02:56.324207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.324232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.324366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.324391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.324548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.324573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.324706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.324731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.324897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.324922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.325117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.325143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.325302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.325335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.325487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.325513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.325685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.325710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.325896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.325921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 
00:26:26.804 [2024-07-12 16:02:56.326043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.326068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.326219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.326244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.326368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.326393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.326527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.326553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.326727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.326753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.326908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.326933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.327062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.804 [2024-07-12 16:02:56.327087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.804 qpair failed and we were unable to recover it. 00:26:26.804 [2024-07-12 16:02:56.327235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.327260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.327421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.327446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.327584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.327610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 
00:26:26.805 [2024-07-12 16:02:56.327736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.327762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.327914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.327944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.328095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.328121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.328243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.328268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.328396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.328421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.328553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.328580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.328731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.328756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.328887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.328912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.329030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.329055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.329184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.329208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 
00:26:26.805 [2024-07-12 16:02:56.329354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.329379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.329506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.329530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.329655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.329680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.329808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.329833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.330012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.330036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.330224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.330248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.330379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.330405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.330566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.330590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.330720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.330744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.330873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.330898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 
00:26:26.805 [2024-07-12 16:02:56.331048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.331074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.331203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.331228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.331412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.331438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.331590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.331615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.331782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.331807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.331967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.331994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.332150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.332176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.332357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.332384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.332543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.332568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.332697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.332738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 
00:26:26.805 [2024-07-12 16:02:56.332918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.332943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.333135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.333160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.333291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.333321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.333484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.333509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.333639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.333664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.805 [2024-07-12 16:02:56.333795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.805 [2024-07-12 16:02:56.333821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.805 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.334003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.334028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.334154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.334179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.334328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.334354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.334489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.334514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 
00:26:26.806 [2024-07-12 16:02:56.334644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.334670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.334791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.334820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.334982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.335008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.335169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.335194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.335338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.335364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.335489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.335515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.335643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.335668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.335826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.335851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.336006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.336031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.336163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.336190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 
00:26:26.806 [2024-07-12 16:02:56.336324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.336350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.336501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.336526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.336689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.336714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.336840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.336868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.337056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.337081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.337241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.337266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.337400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.337426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.337584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.337609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.337737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.337762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.337914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.337941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 
00:26:26.806 [2024-07-12 16:02:56.338106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.338132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.338257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.338282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.338438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.338464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.338625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.338649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.338778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.338804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.338937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.338963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.339118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.339143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.339293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.339324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.339466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.339491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.339621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.339646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 
00:26:26.806 [2024-07-12 16:02:56.339795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.339820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.339971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.806 [2024-07-12 16:02:56.339995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.806 qpair failed and we were unable to recover it. 00:26:26.806 [2024-07-12 16:02:56.340172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.340197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.340328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.340354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.340490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.340515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.340663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.340688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.340808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.340833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.340989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.341014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.341169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.341196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.341377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.341403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 
00:26:26.807 [2024-07-12 16:02:56.341557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.341582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.341734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.341765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.341902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.341927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.342107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.342132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.342257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.342282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.342415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.342442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.342594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.342620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.342778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.342803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.342955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.342980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.343157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.343182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 
00:26:26.807 [2024-07-12 16:02:56.343333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.343359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.343493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.343518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.343652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.343677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.343832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.343858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.344035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.344060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.344220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.344245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.344371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.344398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.344530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.344555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.344704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.344729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.344878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.344903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 
00:26:26.807 [2024-07-12 16:02:56.345058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.345083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.345249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.345274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.345441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.345468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.345601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.345627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.345758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.345784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.345933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.345958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.346113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.346138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.346292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.346323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.346453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.346478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.346637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.346663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 
00:26:26.807 [2024-07-12 16:02:56.346820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.346846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.346971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.346997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.347151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.807 [2024-07-12 16:02:56.347176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.807 qpair failed and we were unable to recover it. 00:26:26.807 [2024-07-12 16:02:56.347331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.347356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.347509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.347534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.347688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.347714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.347869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.347894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.348043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.348068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.348194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.348219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.348385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.348410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 
00:26:26.808 [2024-07-12 16:02:56.348536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.348561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.348741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.348770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.348924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.348949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.349121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.349147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.349271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.349296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.349434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.349460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.349615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.349641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.349770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.349795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.349953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.349978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.350134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.350159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 
00:26:26.808 [2024-07-12 16:02:56.350307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.350338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.350474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.350500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.350645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.350670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.350798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.350823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.350951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.350977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.351118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.351143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.351300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.351331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.351489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.351516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.351653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.351678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.351831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.351857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 
00:26:26.808 [2024-07-12 16:02:56.352036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.352061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.352213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.352238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.352378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.352404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.352561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.352586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.352735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.352760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.352945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.352971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.353100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.353125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.353302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.353332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.353489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.353515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.353648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.353673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 
00:26:26.808 [2024-07-12 16:02:56.353852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.353878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.354045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.354072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.354248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.354274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.354429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.808 [2024-07-12 16:02:56.354455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.808 qpair failed and we were unable to recover it. 00:26:26.808 [2024-07-12 16:02:56.354587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.354611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.354750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.354776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.354934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.354959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.355118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.355144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.355309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.355352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.355505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.355530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 
00:26:26.809 [2024-07-12 16:02:56.355686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.355711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.355895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.355924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.356058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.356083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.356215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.356240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.356419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.356445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.356575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.356601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.356752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.356777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.356939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.356964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.357100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.357125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.357254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.357280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 
00:26:26.809 [2024-07-12 16:02:56.357519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.357544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.357723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.357747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.357901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.357926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.358077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.358102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.358250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.358275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.358437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.358463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.358620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.358646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.358772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.358798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.358930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.358955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.359112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.359137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 
00:26:26.809 [2024-07-12 16:02:56.359262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.359288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.359461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.359487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.359720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.359746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.359873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.359898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.360076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.360101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.360232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.360257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.360415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.360441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.360602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.360627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.360806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.360832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 00:26:26.809 [2024-07-12 16:02:56.360957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.809 [2024-07-12 16:02:56.360982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.809 qpair failed and we were unable to recover it. 
00:26:26.811 [2024-07-12 16:02:56.373501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.811 [2024-07-12 16:02:56.373545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420
00:26:26.811 qpair failed and we were unable to recover it.
00:26:26.811 [2024-07-12 16:02:56.373761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.811 [2024-07-12 16:02:56.373796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420
00:26:26.811 qpair failed and we were unable to recover it.
00:26:26.811 [2024-07-12 16:02:56.373975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.811 [2024-07-12 16:02:56.374011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420
00:26:26.811 qpair failed and we were unable to recover it.
00:26:26.811 [2024-07-12 16:02:56.374175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:26.811 [2024-07-12 16:02:56.374201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:26.811 qpair failed and we were unable to recover it.
00:26:26.815 [2024-07-12 16:02:56.397584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.397611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.397765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.397790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.397922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.397947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.398078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.398105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.398233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.398258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.398393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.398420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.398574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.398600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.398748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.398774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.398927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.398952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.399130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.399154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 
00:26:26.815 [2024-07-12 16:02:56.399306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.399337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.399501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.399527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.399661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.399686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.399815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.399840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.399975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.400000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.400230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.400255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.400411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.400436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.400573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.400599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.400754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.400780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.400933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.400958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 
00:26:26.815 [2024-07-12 16:02:56.401123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.401148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.401305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.401337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.401505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.401531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.401653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.401678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.401805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.401830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.401965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.401991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.402164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.402189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.402325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.402352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.402517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.402543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.402714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.402739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 
00:26:26.815 [2024-07-12 16:02:56.402868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.402893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.403042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.403068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.403196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.403222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.403400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.403426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.403572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.403597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.403714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.403740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.403866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.403891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.404070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.815 [2024-07-12 16:02:56.404096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.815 qpair failed and we were unable to recover it. 00:26:26.815 [2024-07-12 16:02:56.404263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.404293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.404451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.404477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 
00:26:26.816 [2024-07-12 16:02:56.404625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.404650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.404805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.404831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.405009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.405035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.405160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.405185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.405341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.405366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.405499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.405524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.405650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.405675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.405833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.405860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.405988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.406013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.406148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.406173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 
00:26:26.816 [2024-07-12 16:02:56.406334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.406360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.406519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.406544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.406673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.406699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.406834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.406859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.406984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.407009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.407198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.407224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.407379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.407404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.407533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.407559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.407716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.407741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.407867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.407893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 
00:26:26.816 [2024-07-12 16:02:56.408074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.408100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.408232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.408257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.408406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.408441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.408571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.408597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.408770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.408794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.408941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.408965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.409142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.409166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.409323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.409348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.409486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.409510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.409665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.409689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 
00:26:26.816 [2024-07-12 16:02:56.409845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.409869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.410024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.410049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.410204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.410228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.410408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.410443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.410598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.410623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.410777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.410801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.410924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.410949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.411086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.411111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.816 [2024-07-12 16:02:56.411266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.816 [2024-07-12 16:02:56.411307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.816 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.411455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.411480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 
00:26:26.817 [2024-07-12 16:02:56.411602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.411628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.411780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.411806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.411923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.411948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.412095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.412120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.412277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.412301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.412448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.412474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.412630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.412654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.412807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.412833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.412983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.413008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.413142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.413168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 
00:26:26.817 [2024-07-12 16:02:56.413326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.413352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.413512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.413537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.413675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.413700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.413857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.413882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.414021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.414045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.414199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.414223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.414380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.414405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.414558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.414583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.414736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.414761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.414890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.414916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 
00:26:26.817 [2024-07-12 16:02:56.415066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.415101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.415261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.415286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.415438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.415463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.415617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.415643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.415771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.415796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.415955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.415979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.416133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.416159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.416291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.416334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.416463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.416488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.416617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.416642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 
00:26:26.817 [2024-07-12 16:02:56.416823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.416847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.416978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.417002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.417122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.417147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.417278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.417303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.417472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.417496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.417657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.417681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.417807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.417832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.417969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.417993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.418150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.418179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 00:26:26.817 [2024-07-12 16:02:56.418339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.817 [2024-07-12 16:02:56.418375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.817 qpair failed and we were unable to recover it. 
00:26:26.818 [2024-07-12 16:02:56.418529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.418553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.418706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.418731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.418892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.418916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.419046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.419070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.419226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.419250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.419397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.419421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.434903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.434941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.435095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.435126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.435266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.435294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.435460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.435502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 
00:26:26.818 [2024-07-12 16:02:56.435698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.435732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.435895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.435928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.436160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.436191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.436359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.436397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.436589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.436623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.436794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.436827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.453519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.453561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.453815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.453850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.454038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.454070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.454231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.454266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 
00:26:26.818 [2024-07-12 16:02:56.454427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.454463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.454640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.454672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.454886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.454920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.455103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.455135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.455328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.455361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.455566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.455598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.455768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.455799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.455988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.456020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.456240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.456274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.456455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.456480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 
00:26:26.818 [2024-07-12 16:02:56.456624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.456650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.456821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.456845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.456978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.457001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.457164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.457189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.457351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.818 [2024-07-12 16:02:56.457375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.818 qpair failed and we were unable to recover it. 00:26:26.818 [2024-07-12 16:02:56.457539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.457562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.457745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.457770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.457962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.457986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.458126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.458154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.458333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.458358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 
00:26:26.819 [2024-07-12 16:02:56.458527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.458551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.458724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.458750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.458947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.458974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.459142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.459168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.459300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.459335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.459497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.459521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.459800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.459825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.460061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.460085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.460224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.460249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.460412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.460437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 
00:26:26.819 [2024-07-12 16:02:56.460575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.460600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.460775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.460801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.460938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.460964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.461153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.461178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.461320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.461346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.461517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.461543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.461729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.461767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.461966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.461992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.462155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.462180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.462312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.462343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 
00:26:26.819 [2024-07-12 16:02:56.462489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.462514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.462657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.462682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.462838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.462862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.463002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.463028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.463190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.463214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.463389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.463417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.463556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.463586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.463773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.463798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.463963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.463988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.464173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.464198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 
00:26:26.819 [2024-07-12 16:02:56.464439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.464465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.464631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.464655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.464793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.464818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.464976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.465000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.465152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.465177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.465313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.465343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.465514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.819 [2024-07-12 16:02:56.465539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.819 qpair failed and we were unable to recover it. 00:26:26.819 [2024-07-12 16:02:56.465712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.465737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.465877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.465906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.466197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.466221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 
00:26:26.820 [2024-07-12 16:02:56.466591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.466637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.466883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.466908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.467082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.467107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.467262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.467287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.467462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.467500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.467690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.467716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.467852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.467878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.468045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.468072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.468228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.468254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.468393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.468420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 
00:26:26.820 [2024-07-12 16:02:56.468560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.468585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.468723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.468750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:26.820 qpair failed and we were unable to recover it. 00:26:26.820 [2024-07-12 16:02:56.468940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.820 [2024-07-12 16:02:56.468966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 00:26:27.396 [2024-07-12 16:02:56.974585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.396 [2024-07-12 16:02:56.974647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 00:26:27.396 [2024-07-12 16:02:56.974827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.396 [2024-07-12 16:02:56.974854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 00:26:27.396 [2024-07-12 16:02:56.974988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.396 [2024-07-12 16:02:56.975014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 00:26:27.396 [2024-07-12 16:02:56.975177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.396 [2024-07-12 16:02:56.975202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 00:26:27.396 [2024-07-12 16:02:56.975394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.396 [2024-07-12 16:02:56.975421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 00:26:27.396 [2024-07-12 16:02:56.975587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.396 [2024-07-12 16:02:56.975640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 00:26:27.396 [2024-07-12 16:02:56.975824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.396 [2024-07-12 16:02:56.975865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.396 qpair failed and we were unable to recover it. 
00:26:27.397 [2024-07-12 16:02:56.976058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.976084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.976244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.976270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.976443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.976468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.976613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.976641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.976807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.976833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.977001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.977029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.977173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.977200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.977377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.977403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.977588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.977617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.977784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.977810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 
00:26:27.397 [2024-07-12 16:02:56.977970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.977996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.978122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.978164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.978353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.978387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.978527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.978553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.978700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.978726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.978848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.978874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.979006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.979032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.979189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.979216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.979348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.979384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.979536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.979563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 
00:26:27.397 [2024-07-12 16:02:56.979767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.979794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.979957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.979983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.980146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.980173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.980367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.980409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.980575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.980612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.980787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.980814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.980971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.980998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.981180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.981206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.981385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.981413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.981574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.981600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 
00:26:27.397 [2024-07-12 16:02:56.981758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.981785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.981936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.981962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.982103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.982131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.982291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.982324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.982480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.982506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.982637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.982664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.982828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.982855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.982987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.983014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.983147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.983173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 00:26:27.397 [2024-07-12 16:02:56.983354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.983380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.397 qpair failed and we were unable to recover it. 
00:26:27.397 [2024-07-12 16:02:56.983530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.397 [2024-07-12 16:02:56.983556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.983689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.983715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.983871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.983897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.984070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.984096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.984255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.984281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.984446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.984473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.984635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.984663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.984818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.984845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.984997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.985024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.985187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.985213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 
00:26:27.398 [2024-07-12 16:02:56.985366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.985393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.985544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.985586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.985752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.985781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.985967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.985994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.986176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.986203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.986363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.986391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.986530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.986557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.986737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.986763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.986922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.986948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.987133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.987160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 
00:26:27.398 [2024-07-12 16:02:56.987294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.987328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.987461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.987487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.987619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.987646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.987803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.987829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.988003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.988029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.988176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.988203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.988325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.988352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.988505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.988531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.988704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.988731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.988890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.988917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 
00:26:27.398 [2024-07-12 16:02:56.989097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.989124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.989336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.989363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.989540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.989572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.989710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.989737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.989867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.989893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.990074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.990100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.990272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.990298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.990465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.990491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.990674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.990700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.990853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.990880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 
00:26:27.398 [2024-07-12 16:02:56.991007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.398 [2024-07-12 16:02:56.991034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.398 qpair failed and we were unable to recover it. 00:26:27.398 [2024-07-12 16:02:56.991191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.991218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.991383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.991410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.991593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.991620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.991771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.991797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.991960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.991986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.992159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.992186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.992322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.992349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.992483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.992509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.992684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.992711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 
00:26:27.399 [2024-07-12 16:02:56.992891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.992918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.993075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.993102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.993231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.993257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.993396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.993423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.993603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.993629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.993760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.993787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.993945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.993972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.994128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.994155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.994282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.994309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.994478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.994505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 
00:26:27.399 [2024-07-12 16:02:56.994659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.994685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.994843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.994869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.995048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.995075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.995230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.995257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.995418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.995444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.995603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.995630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.995777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.995803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.995987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.996013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.996158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.996185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.996340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.996367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 
00:26:27.399 [2024-07-12 16:02:56.996549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.996576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.996729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.996755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.996886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.996917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.997082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.997109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.997237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.997264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.997421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.997449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.997628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.997655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.997795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.997822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.997973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.998000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.998147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.998174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 
00:26:27.399 [2024-07-12 16:02:56.998331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.998359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.399 [2024-07-12 16:02:56.998514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.399 [2024-07-12 16:02:56.998540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.399 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:56.998721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:56.998748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:56.998928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:56.998955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:56.999138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:56.999165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:56.999300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:56.999332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:56.999474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:56.999501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:56.999685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:56.999712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:56.999867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:56.999893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.000025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.000052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 
00:26:27.400 [2024-07-12 16:02:57.000203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.000230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.000411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.000438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.000592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.000619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.000782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.000809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.000939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.000966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.001120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.001146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.001274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.001301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.001469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.001496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.001645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.001672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.001841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.001868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 
00:26:27.400 [2024-07-12 16:02:57.001998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.002025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.002183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.002210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.002364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.002391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.002546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.002574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.002727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.002754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.002890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.002918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.003098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.003124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.003255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.003282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.003421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.003448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.003604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.003631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 
00:26:27.400 [2024-07-12 16:02:57.003761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.003788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.003945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.003972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.004118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.004148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.004286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.004313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.004479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.004507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.004661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.004688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.004842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.004870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.005005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.005032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.005192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.005220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.005384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.005411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 
00:26:27.400 [2024-07-12 16:02:57.005534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.005561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.400 qpair failed and we were unable to recover it. 00:26:27.400 [2024-07-12 16:02:57.005687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.400 [2024-07-12 16:02:57.005715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.005870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.005898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.006054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.006081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.006238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.006265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.006393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.006421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.006606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.006633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.006792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.006819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.006999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.007025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.007153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.007179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 
00:26:27.401 [2024-07-12 16:02:57.007375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.007402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.007519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.007546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.007704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.007731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.007898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.007926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.008081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.008107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.008269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.008295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.008434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.008461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.008602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.008628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.008762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.008789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.008945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.008972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 
00:26:27.401 [2024-07-12 16:02:57.009102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.009129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.009287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.009320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.009489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.009516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.009675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.009701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.009859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.009886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.010014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.010042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.010195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.010222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.010376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.010403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.010564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.010591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.010715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.010742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 
00:26:27.401 [2024-07-12 16:02:57.010899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.010926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.011081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.011108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.011233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.011265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.011451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.401 [2024-07-12 16:02:57.011479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.401 qpair failed and we were unable to recover it. 00:26:27.401 [2024-07-12 16:02:57.011636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.011662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.011836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.011863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.012017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.012043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.012177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.012204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.012357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.012384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.012537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.012564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 
00:26:27.402 [2024-07-12 16:02:57.012691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.012718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.012896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.012923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.013076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.013102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.013235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.013262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.013393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.013421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.013596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.013623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.013803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.013830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.013976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.014003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.014181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.014208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.014363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.014390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 
00:26:27.402 [2024-07-12 16:02:57.014551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.014577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.014707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.014735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.014921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.014947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.015091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.015117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.015298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.015337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.015509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.015536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.015722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.015749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.015903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.015929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.016081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.016108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.016246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.016273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 
00:26:27.402 [2024-07-12 16:02:57.016428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.016455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.016583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.016609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.016780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.016808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.016963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.016990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.017168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.017194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.017353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.017380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.017559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.017585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.017711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.017739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.017916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.017942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.018100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.018126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 
00:26:27.402 [2024-07-12 16:02:57.018287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.018319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.018450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.018476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.018657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.018688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.018870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.018897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.402 qpair failed and we were unable to recover it. 00:26:27.402 [2024-07-12 16:02:57.019078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.402 [2024-07-12 16:02:57.019105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.019261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.019289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.019459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.019487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.019622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.019649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.019830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.019857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.019989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.020015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 
00:26:27.403 [2024-07-12 16:02:57.020170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.020197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.020339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.020366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.020524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.020551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.020731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.020758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.020916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.020943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.021111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.021138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.021302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.021334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.021490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.021517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.021666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.021693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.021876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.021903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 
00:26:27.403 [2024-07-12 16:02:57.022039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.022067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.022220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.022247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.022402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.022429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.022607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.022634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.022788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.022815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.022972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.022998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.023150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.023177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.023340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.023368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.023499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.023526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.023690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.023718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 
00:26:27.403 [2024-07-12 16:02:57.023872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.023899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.024073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.024100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.024218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.024244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.024425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.024452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.024606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.024633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.024790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.024817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.024997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.025024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.025175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.025202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.025374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.025402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.025582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.025609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 
00:26:27.403 [2024-07-12 16:02:57.025789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.025816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.025992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.026019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.026177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.026208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.026379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.026407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.403 qpair failed and we were unable to recover it. 00:26:27.403 [2024-07-12 16:02:57.026539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.403 [2024-07-12 16:02:57.026566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.026799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.026826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.027010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.027038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.027185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.027216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.027403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.027430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.027586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.027613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 
00:26:27.404 [2024-07-12 16:02:57.027738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.027766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.027902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.027929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.028108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.028135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.028288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.028320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.028476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.028502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.028658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.028685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.028866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.028893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.029047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.029074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.029231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.029258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.029438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.029465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 
00:26:27.404 [2024-07-12 16:02:57.029623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.029650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.029884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.029910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.030083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.030110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.030266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.030294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.030448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.030475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.030658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.030685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.030841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.030868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.031054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.031080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.031209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.031236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.031373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.031401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 
00:26:27.404 [2024-07-12 16:02:57.031573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.031600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.031750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.031777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.031930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.031958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.032109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.032135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.032299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.032332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.032507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.032534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.032687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.032714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.032861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.032888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.033039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.033066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.033221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.033249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 
00:26:27.404 [2024-07-12 16:02:57.033430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.033457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.033617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.033643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.033828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.033859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.033976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.034002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.034158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.404 [2024-07-12 16:02:57.034185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.404 qpair failed and we were unable to recover it. 00:26:27.404 [2024-07-12 16:02:57.034346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.034373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.034549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.034575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.034756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.034783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.034920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.034947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.035105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.035132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 
00:26:27.405 [2024-07-12 16:02:57.035288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.035328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.035484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.035511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.035743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.035770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.035947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.035974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.036110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.036137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.036269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.036296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.036461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.036488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.036621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.036649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.036798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.036825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.037016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.037043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 
00:26:27.405 [2024-07-12 16:02:57.037170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.037196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.037328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.037355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.037508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.037534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.037712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.037738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.037884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.037910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.038065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.038091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.038246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.038273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.038405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.038432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.038587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.038614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.038792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.038822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 
00:26:27.405 [2024-07-12 16:02:57.038954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.038981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.039110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.039137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.039300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.039334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.039487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.039514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.039648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.039676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.039840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.039867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.040022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.040048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.040202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.040230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.040412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.040439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 00:26:27.405 [2024-07-12 16:02:57.040575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.405 [2024-07-12 16:02:57.040602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.405 qpair failed and we were unable to recover it. 
00:26:27.405 [2024-07-12 16:02:57.040759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.040786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.040961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.040987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.041115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.041142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.041285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.041312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.041477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.041503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.041653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.041679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.041805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.041833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.041968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.041995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.042149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.042176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.042335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.042362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 
00:26:27.406 [2024-07-12 16:02:57.042489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.042517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.042658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.042685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.042809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.042836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.042990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.043017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.043139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.043167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.043351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.043379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.043514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.043541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.043719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.043746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.043872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.043898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.044031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.044058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 
00:26:27.406 [2024-07-12 16:02:57.044230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.044256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.044417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.044444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.044613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.044640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.044800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.044827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.044956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.044983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.045137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.045163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.045290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.045322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.045462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.045489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.045647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.045673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.045830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.045861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 
00:26:27.406 [2024-07-12 16:02:57.045991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.046018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.046145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.046170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.046325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.046352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.046481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.046509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.046690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.046717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.046874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.046900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.047082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.047109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.047280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.047307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.047495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.047523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 00:26:27.406 [2024-07-12 16:02:57.047680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.047707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.406 qpair failed and we were unable to recover it. 
00:26:27.406 [2024-07-12 16:02:57.047874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.406 [2024-07-12 16:02:57.047901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.048073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.048099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.048279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.048305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.048464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.048491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.048627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.048653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.048812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.048839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.048970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.048997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.049161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.049188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.049358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.049386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.049568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.049595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 
00:26:27.407 [2024-07-12 16:02:57.049723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.049751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.049912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.049939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.050094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.050121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.050279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.050306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.050450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.050477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.050625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.050652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.050823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.050850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.051006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.051032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.051190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.051217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.051372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.051400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 
00:26:27.407 [2024-07-12 16:02:57.051555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.051582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.051701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.051727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.051897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.051924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.052078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.052104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.052257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.052284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.052451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.052478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.052658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.052685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.052816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.052843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.052995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.053022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 00:26:27.407 [2024-07-12 16:02:57.053178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.407 [2024-07-12 16:02:57.053212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.407 qpair failed and we were unable to recover it. 
00:26:27.407 [2024-07-12 16:02:57.053362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.407 [2024-07-12 16:02:57.053389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:27.407 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 from posix.c:1023:posix_sock_create, sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it." — repeats continuously in the log from 16:02:57.053 through 16:02:57.093 ...]
00:26:27.413 [2024-07-12 16:02:57.093476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.413 [2024-07-12 16:02:57.093508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:27.413 qpair failed and we were unable to recover it.
00:26:27.413 [2024-07-12 16:02:57.093830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.093902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.094218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.094265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.094523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.094557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.094886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.094960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.095216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.095262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.095526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.095567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.095879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.095945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.096256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.096289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.096464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.096498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.096690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.096717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 
00:26:27.413 [2024-07-12 16:02:57.096994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.097059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.097373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.097406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.097571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.097626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.097911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.097945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.098173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.098239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.098524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.098558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.098820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.098887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.099245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.099353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.099574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.099631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.099899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.099933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 
00:26:27.413 [2024-07-12 16:02:57.100126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.100153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.100287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.100320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.100481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.100529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.100718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.100752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.101000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.101066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.101331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.101385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.101548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.101581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.101925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.101996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.102275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.102309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.102513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.102551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 
00:26:27.413 [2024-07-12 16:02:57.102800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.102827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.102967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.103008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.103313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.103398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.103604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.103660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.103958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.103985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.104142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.104169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.104298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.104333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.413 qpair failed and we were unable to recover it. 00:26:27.413 [2024-07-12 16:02:57.104580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.413 [2024-07-12 16:02:57.104613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.104881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.104946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.105162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.105190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 
00:26:27.414 [2024-07-12 16:02:57.105364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.105405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.105567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.105595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.105972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.106036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.106338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.106394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.106760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.106825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.107068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.107100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.107341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.107389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.107714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.107791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.108166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.108228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.108550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.108615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 
00:26:27.414 [2024-07-12 16:02:57.108993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.109068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.109341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.109375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.109593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.109648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.109929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.109993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.110279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.110333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.110601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.110645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.110834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.110863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.111044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.111077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.111360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.111425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.111740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.111768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 
00:26:27.414 [2024-07-12 16:02:57.111929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.111956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.112109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.112183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.112458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.112512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.112875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.112944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.113214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.113265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.113605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.113673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.113998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.114073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.114372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.114420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.114804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.114883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 00:26:27.414 [2024-07-12 16:02:57.115272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.414 [2024-07-12 16:02:57.115372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.414 qpair failed and we were unable to recover it. 
00:26:27.414 [2024-07-12 16:02:57.115641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.115688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.116040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.116107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.116344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.116415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.116798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.116877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.117312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.117388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.117702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.117763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.118109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.118191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.118512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.118574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.118947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.119014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.119305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.119371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 
00:26:27.688 [2024-07-12 16:02:57.119640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.119685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.120020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.120097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.120389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.120436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.120783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.120848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.121180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.121252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.121562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.121609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.121927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.121990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.122347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.122414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.122689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.122736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.123051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.123119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 
00:26:27.688 [2024-07-12 16:02:57.123368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.123416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.123775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.123842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.124179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.124254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.124538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.124585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.124919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.124987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.125278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.125334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.125593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.125640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.125938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.126002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.126293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.688 [2024-07-12 16:02:57.126349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.688 qpair failed and we were unable to recover it. 00:26:27.688 [2024-07-12 16:02:57.126602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.126649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 
00:26:27.689 [2024-07-12 16:02:57.126942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.127005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.127265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.127310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.127567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.127593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.127796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.127865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.128225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.128293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.128601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.128628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.128771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.128796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.129076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.129139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.129416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.129482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.129778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.129861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 
00:26:27.689 [2024-07-12 16:02:57.130195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.130260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.130597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.130671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.131039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.131105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.131391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.131438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.131777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.131843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.132161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.132203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.132367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.132395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.132553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.132579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.132756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.132824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.133096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.133161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 
00:26:27.689 [2024-07-12 16:02:57.133467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.133534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.133892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.133959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.134242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.134288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.134602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.134629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.134843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.134870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.135174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.135241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.135553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.135600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.135980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.136047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.136334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.136381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.136683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.136750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 
00:26:27.689 [2024-07-12 16:02:57.137103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.137174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.137465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.137512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.137806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.137871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.138205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.138270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.138501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.138528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.138662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.138715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.139092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.139158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.139436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.139501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.139867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.689 [2024-07-12 16:02:57.139946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.689 qpair failed and we were unable to recover it. 00:26:27.689 [2024-07-12 16:02:57.140203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.140250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 
00:26:27.690 [2024-07-12 16:02:57.140630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.140697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.141075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.141143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.141437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.141483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.141868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.141933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.142300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.142382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.142669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.142715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.143033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.143106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.143371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.143399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.143572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.143598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.143861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.143892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 
00:26:27.690 [2024-07-12 16:02:57.144146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.144174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.144420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.144467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.144764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.144829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.145180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.145264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.145631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.145698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.146041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.146108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.146415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.146470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.146828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.146893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.147260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.147335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.147599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.147646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 
00:26:27.690 [2024-07-12 16:02:57.147974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.148040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.148337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.148385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.148621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.148667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.149041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.149109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.149398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.149445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.149759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.149824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.150151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.150217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.150481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.150529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.150868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.150934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.151158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.151182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 
00:26:27.690 [2024-07-12 16:02:57.151336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.151363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.151487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.151513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.151857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.151925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.152256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.152338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.152576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.152622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.152918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.152983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.153221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.153268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.153561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.153608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.153925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.153998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 00:26:27.690 [2024-07-12 16:02:57.154256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.690 [2024-07-12 16:02:57.154304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.690 qpair failed and we were unable to recover it. 
00:26:27.691 [2024-07-12 16:02:57.154651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.154698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.154993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.155062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.155283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.155345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.155603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.155650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.156005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.156069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.156326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.156373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.156623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.156669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.156975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.157042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.157300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.157360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.157617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.157675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 
00:26:27.691 [2024-07-12 16:02:57.157976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.158004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.158196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.158244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.158544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.158592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.158965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.159034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.159340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.159387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.159650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.159696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.159988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.160054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.160307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.160367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.160655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.160701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.161075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.161125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 
00:26:27.691 [2024-07-12 16:02:57.161422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.161470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.161739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.161786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.162143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.162213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.162476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.162523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.162745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.162772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.163051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.163118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.163410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.163457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.163752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.163780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.163994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.164037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.164236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.164283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 
00:26:27.691 [2024-07-12 16:02:57.164570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.164617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.164962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.165027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.165287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.165347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.165609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.165656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.165980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.166047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.166306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.166363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.166612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.166654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.166925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.166996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.167275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.167302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 00:26:27.691 [2024-07-12 16:02:57.167521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.167571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.691 qpair failed and we were unable to recover it. 
00:26:27.691 [2024-07-12 16:02:57.167892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.691 [2024-07-12 16:02:57.167918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.168089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.168116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.168341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.168388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.168651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.168697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.168971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.168998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.169167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.169194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.169465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.169512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.169791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.169819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.170003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.170031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.170334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.170381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 
00:26:27.692 [2024-07-12 16:02:57.170627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.170673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.171044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.171110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.171378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.171424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.171720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.171766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.172099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.172166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.172430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.172478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.172803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.172866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.173194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.173259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.173531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.173578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.173910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.173981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 
00:26:27.692 [2024-07-12 16:02:57.174352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.174416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.174702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.174750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.175117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.175183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.175544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.175615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.175972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.176038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.176336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.176384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.176639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.176685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.176977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.177041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.177337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.177384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.177698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.177771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 
00:26:27.692 [2024-07-12 16:02:57.178096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.178169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.178438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.178466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.178622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.178664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.178821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.692 [2024-07-12 16:02:57.178881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.692 qpair failed and we were unable to recover it. 00:26:27.692 [2024-07-12 16:02:57.179216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.179257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.179517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.179564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.179893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.179969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.180264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.180310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.180613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.180659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.181003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.181068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 
00:26:27.693 [2024-07-12 16:02:57.181369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.181417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.181733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.181799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.182133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.182198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.182472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.182519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.182881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.182947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.183362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.183409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.183755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.183824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.184089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.184116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.184240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.184265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.184441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.184469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 
00:26:27.693 [2024-07-12 16:02:57.184669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.184739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.185101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.185167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.185402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.185428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.185555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.185603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.185967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.186029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.186245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.186295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.186681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.186752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.186965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.186990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.187196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.187223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.187490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.187537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 
00:26:27.693 [2024-07-12 16:02:57.187741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.187769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.188026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.188092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.188348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.188396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.188691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.188755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.189084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.189157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.189508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.189577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.189921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.189986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.190281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.190309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.190475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.190501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.190657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.190739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 
00:26:27.693 [2024-07-12 16:02:57.191052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.191117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.191385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.191411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.191553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.191577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.191776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.191817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.192126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.693 [2024-07-12 16:02:57.192166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.693 qpair failed and we were unable to recover it. 00:26:27.693 [2024-07-12 16:02:57.192477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.192542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.192908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.192982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.193211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.193257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.193672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.193739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.194072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.194138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 
00:26:27.694 [2024-07-12 16:02:57.194480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.194550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.194853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.194917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.195171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.195219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.195515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.195582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.195860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.195923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.196192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.196239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.196617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.196684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.197057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.197127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.197397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.197424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 00:26:27.694 [2024-07-12 16:02:57.197634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.694 [2024-07-12 16:02:57.197705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.694 qpair failed and we were unable to recover it. 
00:26:27.699 [2024-07-12 16:02:57.251917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.251983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.252253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.252301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.252643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.252714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.253014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.253049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.253250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.253306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.253622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.253693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.254019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.254085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.254345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.254399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.254656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.254692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.254918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.254982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 
00:26:27.699 [2024-07-12 16:02:57.255213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.255259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.255546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.255583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.255840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.255875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.256201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.256269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.256597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.256668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.257012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.257076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.257345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.257391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.257660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.257726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.258025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.258089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.258375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.258422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 
00:26:27.699 [2024-07-12 16:02:57.258778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.258842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.259218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.259284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.259557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.259605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.259911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.259976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.699 [2024-07-12 16:02:57.260307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.699 [2024-07-12 16:02:57.260382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.699 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.260647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.260693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.261042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.261118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.261376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.261423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.261761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.261827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.262093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.262141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 
00:26:27.700 [2024-07-12 16:02:57.262396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.262432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.262667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.262733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.262989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.263025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.263262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.263309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.263594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.263633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.263811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.263855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.264044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.264078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.264243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.264277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.264577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.264643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.264975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.265040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 
00:26:27.700 [2024-07-12 16:02:57.265310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.265369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.265641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.265688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.265955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.265990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.266269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.266344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.266582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.266629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.266931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.266996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.267263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.267309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.267591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.267638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.267900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.267936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.268130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.268166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 
00:26:27.700 [2024-07-12 16:02:57.268429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.268477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.268783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.268847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.269147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.269211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.269500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.269535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.269729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.269763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.270036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.270103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.270376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.270422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.270752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.270817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.271180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.271247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.271505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.271551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 
00:26:27.700 [2024-07-12 16:02:57.271936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.272000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.272268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.272325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.272610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.272656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.272931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.272966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.273233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.273298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.700 qpair failed and we were unable to recover it. 00:26:27.700 [2024-07-12 16:02:57.273579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.700 [2024-07-12 16:02:57.273626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.273889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.273929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.274194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.274259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.274511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.274547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.274735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.274770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 
00:26:27.701 [2024-07-12 16:02:57.274946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.274982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.275185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.275240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.275526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.275596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.275902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.275973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.276239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.276286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.276641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.276711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.277045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.277081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.277308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.277376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.277638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.277684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.278017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.278084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 
00:26:27.701 [2024-07-12 16:02:57.278401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.278448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.278740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.278776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.278995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.279055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.279337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.279388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.279633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.279669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.279898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.279964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.280265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.280343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.280592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.280638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.280933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.280997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.281235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.281271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 
00:26:27.701 [2024-07-12 16:02:57.281498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.281559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.281872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.281936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.282240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.282303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.282625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.282690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.283027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.283094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.283381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.283427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.283796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.283861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.284209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.284273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.284534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.284580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.701 [2024-07-12 16:02:57.284893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.284926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 
00:26:27.701 [2024-07-12 16:02:57.285109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.701 [2024-07-12 16:02:57.285144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.701 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.285427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.285492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.285775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.285809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.286007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.286041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.286272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.286329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.286668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.286737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.286994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.287034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.287265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.287311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.287655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.287702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.288058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.288128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 
00:26:27.702 [2024-07-12 16:02:57.288464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.288510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.288823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.288889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.289165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.289230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.289463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.289497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.289685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.289720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.289889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.289925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.290100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.290137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.290308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.290355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.290578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.290624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.290863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.290899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 
00:26:27.702 [2024-07-12 16:02:57.291094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.291128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.291293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.291339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.291535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.291567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.291763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.291796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.292003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.292036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.292195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.292228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.292422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.292454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.292630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.292669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.292811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.292842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.292985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.293017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 
00:26:27.702 [2024-07-12 16:02:57.293221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.293252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.293397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.293428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.293631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.293662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.293846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.293876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.294037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.294068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.294218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.294248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.294431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.294462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.294645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.294676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.294853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.294883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.295050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.295081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 
00:26:27.702 [2024-07-12 16:02:57.295230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.702 [2024-07-12 16:02:57.295268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.702 qpair failed and we were unable to recover it. 00:26:27.702 [2024-07-12 16:02:57.295452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.295483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.295717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.295762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.295979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.296012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.296191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.296223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.296456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.296486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.296638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.296673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.296914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.296945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.297120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.297152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.297339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.297378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 
00:26:27.703 [2024-07-12 16:02:57.297525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.297553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.297768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.297799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.297979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.298010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.298161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.298189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.298393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.298423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.298614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.298644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.298819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.298848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.299010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.299038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.299224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.299255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.299427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.299455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 
00:26:27.703 [2024-07-12 16:02:57.299625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.299653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.299822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.299850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.300018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.300046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.300217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.300246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.300406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.300435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.300598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.300626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.300769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.300795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.300934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.300965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.301156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.301184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.301347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.301384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 
00:26:27.703 [2024-07-12 16:02:57.301556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.301582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.301732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.301758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.305330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.305371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.305546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.305572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.305759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.305787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.305944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.305972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.306130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.306157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.306328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.306370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.306557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.306593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.306806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.306843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 
00:26:27.703 [2024-07-12 16:02:57.307086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.307143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.307343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.703 [2024-07-12 16:02:57.307406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.703 qpair failed and we were unable to recover it. 00:26:27.703 [2024-07-12 16:02:57.307609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.307646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.307876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.307933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.308166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.308222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.308431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.308467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.308639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.308684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.308865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.308917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.309139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.309191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.309369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.309405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 
00:26:27.704 [2024-07-12 16:02:57.309590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.309644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.309808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.309859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.310094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.310130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.310300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.310342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.310515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.310571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.310724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.310759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.310940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.310976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.311133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.311169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.311341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.311384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.311560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.311601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 
00:26:27.704 [2024-07-12 16:02:57.311803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.311838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.311994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.312029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.312236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.312272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.312478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.312521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.312679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.312709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.312855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.312881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.313042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.313069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.313228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.313254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.313424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.313452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.313584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.313611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 
00:26:27.704 [2024-07-12 16:02:57.313749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.313775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.313931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.313957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.314146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.314173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.314336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.314371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.314506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.314533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.314720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.314747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.314920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.314947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.315071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.315096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.315254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.315280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.315444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.315470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 
00:26:27.704 [2024-07-12 16:02:57.315601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.315627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.315784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.315810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.704 [2024-07-12 16:02:57.315963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.704 [2024-07-12 16:02:57.315990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.704 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.316179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.316206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.316339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.316374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.316497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.316523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.316669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.316696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.316867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.316894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.317050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.317077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.317233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.317260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 
00:26:27.705 [2024-07-12 16:02:57.317423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.317450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.317580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.317607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.317767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.317794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.317949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.317976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.318102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.318130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.318294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.318326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.318518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.318544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.318710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.318738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.318898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.318924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.319085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.319111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 
00:26:27.705 [2024-07-12 16:02:57.319245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.319274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.319439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.319466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.319606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.319633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.319795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.319825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.319984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.320012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.320168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.320196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.320377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.320404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.320555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.320582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.320737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.320764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.320894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.320919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 
00:26:27.705 [2024-07-12 16:02:57.321075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.321102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.321273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.321302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.321450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.321476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.321632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.321662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.321794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.321819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.321976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.322003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.322188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.322215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.322372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.322399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.705 [2024-07-12 16:02:57.322521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.705 [2024-07-12 16:02:57.322549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.705 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.322686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.322713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 
00:26:27.706 [2024-07-12 16:02:57.322846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.322872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.323050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.323077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.323203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.323229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.323391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.323419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.323578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.323606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.323732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.323757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.323913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.323940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.324096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.324123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.324248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.324273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.324440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.324470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 
00:26:27.706 [2024-07-12 16:02:57.324610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.324637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.324762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.324788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.324942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.324969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.325127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.325155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.325291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.325324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.325509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.325536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.325699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.325726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.325889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.325917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.326046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.326072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.326208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.326236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 
00:26:27.706 [2024-07-12 16:02:57.326407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.326437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.326594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.326624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.326787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.326816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.326954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.326982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.327135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.327162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.327325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.327352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.327508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.327535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.327704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.327731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.327913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.327939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.328123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.328149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 
00:26:27.706 [2024-07-12 16:02:57.328297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.328331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.328457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.328482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.328634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.328660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.328804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.328829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.328955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.328981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.329119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.329145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.329303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.329337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.329463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.329488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.329630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.329656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 00:26:27.706 [2024-07-12 16:02:57.329838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.706 [2024-07-12 16:02:57.329864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.706 qpair failed and we were unable to recover it. 
00:26:27.706 [2024-07-12 16:02:57.329996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.330023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.330179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.330206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.330365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.330396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.330571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.330601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.330735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.330762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.330918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.330944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.331106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.331134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.331300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.331332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.331485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.331516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 00:26:27.707 [2024-07-12 16:02:57.331683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.707 [2024-07-12 16:02:57.331710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.707 qpair failed and we were unable to recover it. 
00:26:27.712 [2024-07-12 16:02:57.373789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.373852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.374173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.374255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.374500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.374529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.374706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.374740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.375014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.375091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.375408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.375437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.375591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.375654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.375991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.376053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.376382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.376411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.376703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.376793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 
00:26:27.712 [2024-07-12 16:02:57.377123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.377208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.377509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.377537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.377697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.377725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.378055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.378136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.378450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.378478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.378744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.378831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.379224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.379307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.379532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.379559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.379696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.379726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.379889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.379917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 
00:26:27.712 [2024-07-12 16:02:57.380082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.380120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.380336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.380383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.712 [2024-07-12 16:02:57.380516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.712 [2024-07-12 16:02:57.380544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.712 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.380760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.380788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.380961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.380988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.381146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.381173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.381327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.381355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.381488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.381516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.381716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.381750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.381959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.381988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 
00:26:27.713 [2024-07-12 16:02:57.382200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.382258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.382493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.382523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.382784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.382843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.383145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.383174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.383334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.383364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.383500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.383529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.383756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.383834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.384156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.384215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.384460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.384489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.384626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.384655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 
00:26:27.713 [2024-07-12 16:02:57.384812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.384841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.385004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.385081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.385480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.385568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.385924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.386006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.386377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.386436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.386838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.386914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.387220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.387248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.387459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.387519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.387832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.387866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.388062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.388111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 
00:26:27.713 [2024-07-12 16:02:57.388371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.388400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.388577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.388605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.388895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.388929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.389243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.389337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.389657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.713 [2024-07-12 16:02:57.389733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.713 qpair failed and we were unable to recover it. 00:26:27.713 [2024-07-12 16:02:57.390015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.390043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.390196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.390234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.390487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.390515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.390652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.390680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.390916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.390995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 
00:26:27.714 [2024-07-12 16:02:57.391279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.391312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.391486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.391513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.391741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.391817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.392135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.392162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.392417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.392477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.392849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.392930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.393299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.393372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.393724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.393808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.394182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.394259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.394779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.394871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 
00:26:27.714 [2024-07-12 16:02:57.395199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.395229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.395394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.395423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.395602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.395631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.395948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.396036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.396381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.396441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.396730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.396789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.397086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.397113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.397252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.397280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.397489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.397550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.397887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.397914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 
00:26:27.714 [2024-07-12 16:02:57.398081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.398111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.398336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.398397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.398610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.398656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.714 [2024-07-12 16:02:57.398868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.714 [2024-07-12 16:02:57.398904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.714 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.399200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.399229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.399417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.399446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.399600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.399661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.399903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.399937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.400140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.400180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.400384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.400445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 
00:26:27.715 [2024-07-12 16:02:57.400812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.400889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.401243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.401335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.715 [2024-07-12 16:02:57.401626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.715 [2024-07-12 16:02:57.401661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.715 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.401975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.402062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.402435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.402482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.402677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.402729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.402907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.402936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.403100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.403143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.403510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.403573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.403921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.403962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 
00:26:27.992 [2024-07-12 16:02:57.404134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.404165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.404339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.404380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.404514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.404543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.404715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.404743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.405072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.405166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.405454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.405490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.405683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.405717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.405892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.405927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.406275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.406353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.406675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.406709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 
00:26:27.992 [2024-07-12 16:02:57.406989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.407077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.407419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.407479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.407823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.407852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.407998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.408028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.408294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.408384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.408730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.408807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.409202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.409278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.409576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.409614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.409747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.409773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 00:26:27.992 [2024-07-12 16:02:57.409934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.992 [2024-07-12 16:02:57.410000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.992 qpair failed and we were unable to recover it. 
00:26:27.992 [2024-07-12 16:02:57.410288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.410324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.410509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.410536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.410812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.410890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.411251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.411348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.411628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.411656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.411840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.411867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.412013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.412040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.412206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.412264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.412620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.412649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 00:26:27.993 [2024-07-12 16:02:57.412816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.993 [2024-07-12 16:02:57.412845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.993 qpair failed and we were unable to recover it. 
00:26:27.993 [2024-07-12 16:02:57.412985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.993 [2024-07-12 16:02:57.413014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.993 qpair failed and we were unable to recover it.
00:26:27.993 [... the same pair of errors repeats continuously from 16:02:57.413 through 16:02:57.484: posix_sock_create connect() failed with errno = 111 and nvme_tcp_qpair_connect_sock reported a sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420, each attempt followed by "qpair failed and we were unable to recover it." ...]
00:26:27.998 [2024-07-12 16:02:57.484348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.484401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.484683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.484712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.484898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.484926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.485213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.485269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.485571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.485624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.485906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.485960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.486246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.486305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.486625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.486678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.486959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.487016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.487338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.487392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 
00:26:27.998 [2024-07-12 16:02:57.487719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.487771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.488055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.488109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.488415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.488468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.488763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.488791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.489012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.489073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.489331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.489385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.489662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.489718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.489999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.490052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.490378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.490431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 00:26:27.998 [2024-07-12 16:02:57.490688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.998 [2024-07-12 16:02:57.490717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.998 qpair failed and we were unable to recover it. 
00:26:27.999 [2024-07-12 16:02:57.490905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.490959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.491241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.491294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.491592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.491645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.491930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.491987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.492272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.492339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.492631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.492682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.493007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.493059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.493366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.493421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.493767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.493821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 133632 Killed "${NVMF_APP[@]}" "$@"
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.494132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.494184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
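The shell message above is the key event in this stretch of the log: target_disconnect.sh has deliberately killed the running nvmf_tgt process (PID 133632), so nothing is listening on 10.0.0.2 port 4420 and every reconnect attempt from the initiator side is refused with errno 111 (ECONNREFUSED), which is exactly what the repeated posix_sock_create / nvme_tcp_qpair_connect_sock errors report; the trace that follows shows the test case immediately restarting the target. A minimal stand-alone sketch of the same failure mode, using a throwaway local listener instead of SPDK (the port, commands, and variable names below are illustrative and not taken from the test scripts):

#!/usr/bin/env bash
# Illustration only: once a listener is killed, every connect() attempt fails
# with ECONNREFUSED (errno 111), mirroring the qpair reconnect errors above.
port=4420                                   # illustrative; the test targets 10.0.0.2:4420
python3 -m http.server "$port" --bind 127.0.0.1 >/dev/null 2>&1 &
listener=$!
sleep 1
timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/$port" && echo "connect ok while the listener is up"
kill -9 "$listener"                         # emulate the 'Killed' step reported by the shell
wait "$listener" 2>/dev/null
for attempt in 1 2 3; do                    # retries are refused until a new listener starts
    timeout 1 bash -c "exec 3<>/dev/tcp/127.0.0.1/$port" 2>/dev/null ||
        echo "attempt $attempt: connection refused (errno 111)"
done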
00:26:27.999 [2024-07-12 16:02:57.494490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.494545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:27.999 [2024-07-12 16:02:57.494822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.494874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:27.999 [2024-07-12 16:02:57.495198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.495252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:27.999 [2024-07-12 16:02:57.495625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.495680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:27.999 [2024-07-12 16:02:57.495991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.496045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:27.999 [2024-07-12 16:02:57.496389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.496445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.496744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.496824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.497958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.497991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.498157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.498185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.498366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.498394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.498638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.498691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.498944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.498999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.499169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.499198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.499354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.499393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.499585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.499644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.499901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.499961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.500445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.500476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 00:26:27.999 [2024-07-12 16:02:57.500817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:27.999 [2024-07-12 16:02:57.500870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:27.999 qpair failed and we were unable to recover it. 
00:26:27.999 [2024-07-12 16:02:57.501118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.501172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.501368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.501419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.501698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.501753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.501987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.502038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.502177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.502205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 [2024-07-12 16:02:57.502510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.502563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=134192
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:27.999 [2024-07-12 16:02:57.502843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:27.999 [2024-07-12 16:02:57.502895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:27.999 qpair failed and we were unable to recover it.
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 134192
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 134192 ']'
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:27.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.999 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.000 16:02:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.000 [2024-07-12 16:02:57.505136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.505178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.505449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.505481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.505786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.505843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.506082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.506163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.506451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.506516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.506727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.506774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.506986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.507038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.507227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.507254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.507499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.507547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 
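The xtrace entries above show the recovery half of the test case: nvmfappstart launches a fresh nvmf_tgt (core mask 0xF0) inside the cvl_0_0_ns_spdk network namespace as PID 134192, and waitforlisten then blocks until that process is accepting RPCs on the UNIX domain socket /var/tmp/spdk.sock (rpc_addr=/var/tmp/spdk.sock, max_retries=100), which is what the "Waiting for process to start up..." line announces. A rough sketch of that wait loop, assuming a simple poll on the socket path rather than SPDK's actual waitforlisten implementation:

#!/usr/bin/env bash
# Rough illustration of the wait-for-listen step traced above; this is not
# SPDK's waitforlisten(), just a poll using the values printed in the log.
pid=134192                        # PID reported for the restarted nvmf_tgt
rpc_addr=/var/tmp/spdk.sock
max_retries=100
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 0; i < max_retries; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then   # give up if the target died again
        echo "process $pid exited before listening" >&2
        exit 1
    fi
    if [[ -S "$rpc_addr" ]]; then           # socket present: RPCs can be issued
        echo "process $pid is listening on $rpc_addr"
        exit 0
    fi
    sleep 0.5
done
echo "timed out waiting for $rpc_addr" >&2
exit 1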
00:26:28.000 [2024-07-12 16:02:57.507734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.507787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.508029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.508081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.508239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.508264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.508582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.508640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.508926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.508980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.509230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.509283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.509469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.509498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.509727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.509777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.510036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.510087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.510237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.510263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 
00:26:28.000 [2024-07-12 16:02:57.510404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.510430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.510668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.510716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.510973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.511026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.511155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.511181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.511366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.511437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.511651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.511705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.511984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.512036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.512217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.512243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.512597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.512655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.512949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.513001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 
00:26:28.000 [2024-07-12 16:02:57.513159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.513187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.513323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.513350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.513613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.513666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.513907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.513957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.514246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.514309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.514461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.000 [2024-07-12 16:02:57.514487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.000 qpair failed and we were unable to recover it. 00:26:28.000 [2024-07-12 16:02:57.514616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.514644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.515013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.515072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.515252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.515278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.515442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.515470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 
00:26:28.001 [2024-07-12 16:02:57.515762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.515822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.516076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.516128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.516281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.516306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.516460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.516486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.516745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.516798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.517101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.517154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.517417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.517444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.517634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.517661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.517898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.517956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.518159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.518198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 
00:26:28.001 [2024-07-12 16:02:57.518382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.518452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.518700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.518750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.519065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.519127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.519287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.519312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.519449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.519474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.519829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.519888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.520251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.520303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.520464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.520490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.520755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.520807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.521128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.521184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 
00:26:28.001 [2024-07-12 16:02:57.521373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.521399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.521657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.521707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.521974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.522027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.522186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.522212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.522350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.522377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.522657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.522713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.523016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.523074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.523312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.523347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.523506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.523530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.523844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.523913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 
00:26:28.001 [2024-07-12 16:02:57.524238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.524290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.524435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.524460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.524755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.524820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.525102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.525157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.525323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.001 [2024-07-12 16:02:57.525348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.001 qpair failed and we were unable to recover it. 00:26:28.001 [2024-07-12 16:02:57.525493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.525522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.525803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.525856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.526150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.526201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.526371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.526397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.526551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.526576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 
00:26:28.002 [2024-07-12 16:02:57.526804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.526855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.527041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.527067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.527223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.527248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.527543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.527604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.527885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.527936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.528219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.528280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.528449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.528475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.528681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.528733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.529061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.529116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.529255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.529282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 
00:26:28.002 [2024-07-12 16:02:57.529458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.529485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.529733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.529783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.530108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.530161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.530350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.530376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.530511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.530536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.530814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.530871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.531138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.531190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.531348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.531373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.531709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.531769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.532087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.532142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 
00:26:28.002 [2024-07-12 16:02:57.532297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.532328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.532488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.532514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.532801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.532862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.533130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.533182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.533337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.533363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.533583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.533609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.533933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.533990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.534150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.534176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.534503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.534558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.534844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.534897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 
00:26:28.002 [2024-07-12 16:02:57.535201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.535255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.535414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.535440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.535704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.535755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.536112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.536168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.536398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.536425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.536714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.536766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.002 qpair failed and we were unable to recover it. 00:26:28.002 [2024-07-12 16:02:57.537086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.002 [2024-07-12 16:02:57.537142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.537307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.537340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.537520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.537546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.537849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.537916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 
00:26:28.003 [2024-07-12 16:02:57.538218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.538279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.538468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.538493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.538725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.538777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.539077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.539138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.539322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.539348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.539477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.539502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.539746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.539798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.540112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.540178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.540339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.540365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.540524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.540550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 
00:26:28.003 [2024-07-12 16:02:57.540838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.540896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.541213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.541259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.541421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.541446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.541686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.541737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.542069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.542124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.542279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.542305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.542480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.542505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.542777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.542829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.543146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.543209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.543398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.543424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 
00:26:28.003 [2024-07-12 16:02:57.543722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.543781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.544003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.544050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.544209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.544234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.544474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.544530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.544853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.544916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.545201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.545252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.545483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.545509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.545825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.545885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.546201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.546258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.546469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.546496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 
00:26:28.003 [2024-07-12 16:02:57.546749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.546803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.546964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.547049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.547183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.547208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.547541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.547596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.547892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.547952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.548260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.548332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.548506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.548531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.003 [2024-07-12 16:02:57.548789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.003 [2024-07-12 16:02:57.548839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.003 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.549151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.549212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.549403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.549429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 
00:26:28.004 [2024-07-12 16:02:57.549689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.549738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.549983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.550037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.550170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.550195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.550375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.550434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.550690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.550743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.550997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.551050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.551227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.551252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.551405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.551431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.551704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.551761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.552059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.552118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.552224] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:26:28.004 [2024-07-12 16:02:57.552280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.552305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 [2024-07-12 16:02:57.552306] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.552453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.552479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.552733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.552783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.553055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.553104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.553259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.553286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.553430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.553456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.553712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.553766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.554020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.554069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.554192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.554218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 
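errno 111 on Linux is ECONNREFUSED: each connect() issued by posix_sock_create() toward 10.0.0.2:4420 is being refused, most likely because the nvmf target whose SPDK/DPDK initialization is logged just above has not yet bound its NVMe/TCP listener on that port. The following is a minimal, standalone C sketch (not part of the test code; the address and port are copied from the log purely for illustration) showing how a connect() against a port with no listener surfaces the same errno:

/* Minimal sketch: reproduce errno 111 (ECONNREFUSED) the way
 * posix_sock_create() reports it when no listener is present.
 * The address/port below are illustrative values taken from the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}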
00:26:28.004 [2024-07-12 16:02:57.554355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.554382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.554686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.554740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.555062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.555112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.555288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.555313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.555473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.555498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.555713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.555772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.556087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.556134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.556310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.556342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.556507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.556533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.556801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.556852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 
00:26:28.004 [2024-07-12 16:02:57.557070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.557127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.557280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.557306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.557448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.557473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.557766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.557818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.558066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.558119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.558285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.558310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.558478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.558503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.558793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.558848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.559160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.004 [2024-07-12 16:02:57.559220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.004 qpair failed and we were unable to recover it. 00:26:28.004 [2024-07-12 16:02:57.559377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.559403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 
00:26:28.005 [2024-07-12 16:02:57.559670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.559723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.560038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.560092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.560246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.560272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.560435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.560461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.560676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.560730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.561002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.561061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.561214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.561239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.561418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.561445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.561684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.561739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.562037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.562090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 
00:26:28.005 [2024-07-12 16:02:57.562253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.562279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.562576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.562634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.562919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.562967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.563295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.563366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.563505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.563530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.563788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.563846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.564130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.564183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.564344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.564370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.564606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.564661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.564898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.564957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 
00:26:28.005 [2024-07-12 16:02:57.565247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.565299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.565495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.565520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.565764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.565816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.566037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.566093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.566251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.566280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.566420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.566446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.566742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.566804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.567116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.567174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.567437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.567463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.567720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.567772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 
00:26:28.005 [2024-07-12 16:02:57.568051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.568104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.568262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.568288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.568448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.568474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.568777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.568839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.005 qpair failed and we were unable to recover it. 00:26:28.005 [2024-07-12 16:02:57.569151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.005 [2024-07-12 16:02:57.569233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.569386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.569413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.569685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.569738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.570028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.570085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.570264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.570289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.570448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.570474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 
00:26:28.006 [2024-07-12 16:02:57.570737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.570787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.571076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.571134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.571270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.571295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.571434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.571459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.571726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.571780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.572092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.572153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.572285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.572311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.572475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.572500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.572739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.572792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.573055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.573113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 
00:26:28.006 [2024-07-12 16:02:57.573270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.573295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.573488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.573518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.573721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.573783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.574024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.574075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.574252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.574277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.574439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.574464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.574718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.574771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.575032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.575076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.575229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.575254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.575384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.575409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 
00:26:28.006 [2024-07-12 16:02:57.575671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.575721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.576059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.576106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.576284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.576310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.576497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.576523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.576709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.576762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.577022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.577076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.577211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.577237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.577411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.577436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.577629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.577688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.577925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.577978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 
00:26:28.006 [2024-07-12 16:02:57.578153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.578178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.578307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.578339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.578558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.578611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.578929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.578984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.579278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.579359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.579540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.006 [2024-07-12 16:02:57.579565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.006 qpair failed and we were unable to recover it. 00:26:28.006 [2024-07-12 16:02:57.579865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.579932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.580234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.580292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.580482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.580508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.580712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.580764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 
00:26:28.007 [2024-07-12 16:02:57.580905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.580932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.581196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.581259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.581442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.581468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.581657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.581705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.581962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.582015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.582195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.582220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.582493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.582549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.582825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.582874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.583116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.583170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.583403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.583455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 
00:26:28.007 [2024-07-12 16:02:57.583727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.583783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.584083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.584142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.584324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.584350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.584509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.584534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.584777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.584830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.585116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.585176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.585391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.585417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.585649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.585698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.585920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.585968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.586239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.586298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 
00:26:28.007 [2024-07-12 16:02:57.586469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.586494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.586718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.586780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.587062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.587113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.587285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.587310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.587477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.587502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.587772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.587830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.588083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.588132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.588329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.588355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.588535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.588561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.588756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.588809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 
00:26:28.007 [2024-07-12 16:02:57.589064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.589115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.589268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.589293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.589436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.589462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.589769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.589821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.590096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.590144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.590298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.590332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.590526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.007 [2024-07-12 16:02:57.590576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.007 qpair failed and we were unable to recover it. 00:26:28.007 [2024-07-12 16:02:57.590841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.590884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.591042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.591067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.591210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.591235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 
00:26:28.008 [2024-07-12 16:02:57.591366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.591392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.591541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.591567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.591726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.591752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.591878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.591905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.592085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.592111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.592271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.592296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.592431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.592457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.592612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.592638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.592813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.592838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.592997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.593022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 
00:26:28.008 [2024-07-12 16:02:57.593200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.593225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.593392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.593418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.593547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.593573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.593731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.593757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.593880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.593905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.594085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.594110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.594261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.594286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.594420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.594447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.594575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.594601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.594756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.594781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 
00:26:28.008 [2024-07-12 16:02:57.594955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.594980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.595145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.595171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.595328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.595354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.595515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.595541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.595717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.595742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.595868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.595894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.596075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.596105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.596262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.596288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.596479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.596506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.596641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.596666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 
00:26:28.008 [2024-07-12 16:02:57.596818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.596843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.597023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.597049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.597283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.597308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.597443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.597470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.597633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.597658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.597806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.597831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.597964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.597989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.598142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.008 [2024-07-12 16:02:57.598167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.008 qpair failed and we were unable to recover it. 00:26:28.008 [2024-07-12 16:02:57.598289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.598320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.598477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.598502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 
00:26:28.009 [2024-07-12 16:02:57.598639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.598664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.598896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.598922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.599073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.599099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.599281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.599307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.599438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.599463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.599638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.599663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.599813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.599839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.599997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.600022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.600177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.600202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.600338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.600364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 
00:26:28.009 [2024-07-12 16:02:57.600531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.600557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.600711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.600736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.600888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.600913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.601066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.601096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.601245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.601270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.601429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.601455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.601608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.601634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.601802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.601827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.601995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.602020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.602198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.602224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 
00:26:28.009 [2024-07-12 16:02:57.602358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.602383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.602520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.602545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.602707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.602733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.602910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.602936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.603058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.603084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.603238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.603263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.603431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.603456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.603597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.603622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.603778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.603803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.603959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.603984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 
00:26:28.009 [2024-07-12 16:02:57.604162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.604188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.604368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.604394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.604525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.604550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.009 [2024-07-12 16:02:57.604714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.009 [2024-07-12 16:02:57.604740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.009 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.604864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.604890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.605046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.605072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.605210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.605236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.605394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.605420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.605571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.605597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.605778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.605803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 
00:26:28.010 [2024-07-12 16:02:57.605962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.605991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.606121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.606146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.606295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.606327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.606480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.606506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.606666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.606692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.606847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.606872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.607028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.607053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.607204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.607230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.607388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.607415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.607563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.607588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 
00:26:28.010 [2024-07-12 16:02:57.607756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.607781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.607933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.607958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.608107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.608131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.608269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.608295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.608475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.608502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.608661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.608687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.608864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.608889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.609068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.609093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.609219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.609243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.609435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.609462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 
00:26:28.010 [2024-07-12 16:02:57.609616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.609641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.609793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.609818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.609942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.609967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.610126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.610151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.610303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.610334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.610505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.610530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.610707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.610731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.610908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.610933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.611090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.611115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.611293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.611335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 
00:26:28.010 [2024-07-12 16:02:57.611467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.611492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.611671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.611697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.611851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.611876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.612030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.612055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.010 qpair failed and we were unable to recover it. 00:26:28.010 [2024-07-12 16:02:57.612210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.010 [2024-07-12 16:02:57.612236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.612371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.612397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.612556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.612581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.612738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.612763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.612910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.612935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.613065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.613090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-07-12 16:02:57.613230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.613255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.613424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.613454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.613571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.613596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.613737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.613763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.613940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.613965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.614097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.614122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.614298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.614329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.614461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.614487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.614605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.614630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.614765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.614791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-07-12 16:02:57.614937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.614962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.615101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.615128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.615306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.615338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.615497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.615523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.615670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.615696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.615851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.615877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.616024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.616049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.616177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.616203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.616355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.616381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.616545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.616572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-07-12 16:02:57.616761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.616787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.616937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.616962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.617093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.617118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.617284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.617311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.617475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.617500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.617621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.617647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.617774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.617800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.617955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.617981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.618100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.618129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.618267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.618293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 
00:26:28.011 [2024-07-12 16:02:57.618449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.618475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.618606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.618632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.618764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.618789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.618937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.618962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.011 [2024-07-12 16:02:57.619119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.011 [2024-07-12 16:02:57.619144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.011 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.619329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.619355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.619519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.619545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.619676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.619701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.619852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.619876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.620002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.620028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 
00:26:28.012 [2024-07-12 16:02:57.620191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.620216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.620370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.620396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.620556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.620581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.620734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.620759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.620913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.620938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.621061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.621087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.621263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.621288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.621431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.621458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.621611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.621637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.621653] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.012 [2024-07-12 16:02:57.621789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.621815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 
00:26:28.012 [2024-07-12 16:02:57.621969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.621995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.622143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.622169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.622295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.622326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.622489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.622515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.622697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.622722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.622880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.622906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.623033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.623058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.623242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.623269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.623424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.623450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.623608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.623634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 
00:26:28.012 [2024-07-12 16:02:57.623801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.623827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.623979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.624004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.624134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.624159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.624312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.624344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.624483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.624509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.624663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.624687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.624837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.624862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.625008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.625034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.625186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.625211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.625384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.625411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 
00:26:28.012 [2024-07-12 16:02:57.625566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.625592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.625768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.625794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.625927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.625952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.626118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.626143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.626324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.626350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.012 qpair failed and we were unable to recover it. 00:26:28.012 [2024-07-12 16:02:57.626512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.012 [2024-07-12 16:02:57.626538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.626667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.626692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.626870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.626896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.627025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.627051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.627219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.627245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 
00:26:28.013 [2024-07-12 16:02:57.627383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.627410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.627577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.627602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.627730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.627759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.627921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.627947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.628075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.628100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.628280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.628305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.628481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.628507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.628662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.628688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.628821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.628846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.628979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.629005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 
00:26:28.013 [2024-07-12 16:02:57.629133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.629159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.629340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.629366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.629551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.629577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.629736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.629761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.629897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.629923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.630078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.630104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.630259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.630285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.630459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.630484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.630670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.630696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.630854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.630880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 
00:26:28.013 [2024-07-12 16:02:57.631004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.631029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.631180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.631205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.631362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.631387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.631581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.631607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.631728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.631755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.631888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.631913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.632035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.632060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.632229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.632255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.632436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.632462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.013 [2024-07-12 16:02:57.632599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.632629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 
00:26:28.013 [2024-07-12 16:02:57.632778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.013 [2024-07-12 16:02:57.632803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.013 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.633040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.633066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.633196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.633222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.633360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.633387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.633522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.633547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.633683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.633709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.633838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.633863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.633990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.634016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.634195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.634220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.634378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.634405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 
00:26:28.014 [2024-07-12 16:02:57.634564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.634589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.634760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.634785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.634954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.634980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.635139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.635165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.635293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.635324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.635459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.635484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.635609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.635634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.635790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.635816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.635947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.635972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.636104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.636129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 
00:26:28.014 [2024-07-12 16:02:57.636278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.636303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.636495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.636521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.636677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.636702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.636858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.636883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.637064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.637088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.637236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.637262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.637421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.637453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.637578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.637603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.637754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.637780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 00:26:28.014 [2024-07-12 16:02:57.637942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.014 [2024-07-12 16:02:57.637968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.014 qpair failed and we were unable to recover it. 
00:26:28.014 [2024-07-12 16:02:57.638095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.014 [2024-07-12 16:02:57.638121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:28.014 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every subsequent connection attempt logged between 16:02:57.638 and 16:02:57.675 ...]
00:26:28.020 [2024-07-12 16:02:57.675377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.020 [2024-07-12 16:02:57.675402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420
00:26:28.020 qpair failed and we were unable to recover it.
00:26:28.020 [2024-07-12 16:02:57.675578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.675603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.675758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.675784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.675958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.675983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.676215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.676241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.676398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.676423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.676573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.676599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.676751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.676778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.676940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.676965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.677117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.677142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.677298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.677330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 
00:26:28.020 [2024-07-12 16:02:57.677481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.677506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.677662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.677687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.677813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.677838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.677956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.677982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.678136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.678162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.678386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.678412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.678568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.678593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.678751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.678778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.678906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.678932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.679109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.679135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 
00:26:28.020 [2024-07-12 16:02:57.679263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.679288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.679427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.679453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.679579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.679605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.679733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.679758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.679913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.679938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.680115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.680141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.680275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.680301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.680536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.680562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.680697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.680723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.680886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.680912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 
00:26:28.020 [2024-07-12 16:02:57.681042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.681067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.681208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.681235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.681417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.681444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.681584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.681611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.681744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.681770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.681898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.681924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.682086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.682112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.682267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.682294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.682465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.020 [2024-07-12 16:02:57.682491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.020 qpair failed and we were unable to recover it. 00:26:28.020 [2024-07-12 16:02:57.682627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.682652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 
00:26:28.021 [2024-07-12 16:02:57.682833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.682858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.682987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.683014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.683161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.683187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.683338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.683364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.683491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.683521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.683703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.683729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.683895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.683920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.684046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.684071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.684224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.684249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.684390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.684417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 
00:26:28.021 [2024-07-12 16:02:57.684569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.684594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.684751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.684776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.684921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.684947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.685076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.685102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.685232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.685258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.685416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.685442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.685568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.685594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.685722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.685748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.685902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.685928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.686065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.686091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 
00:26:28.021 [2024-07-12 16:02:57.686225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.686250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.686405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.686431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.686582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.686608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.686811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.686837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.687007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.687033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.687190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.687215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.687384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.687410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.687533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.687558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.687692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.687717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.687879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.687905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 
00:26:28.021 [2024-07-12 16:02:57.688059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.688084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.688235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.688265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.688444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.688470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.688598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.688624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.688799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.688824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.688952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.688979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.689210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.689235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.689413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.689440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.689597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.689623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.021 qpair failed and we were unable to recover it. 00:26:28.021 [2024-07-12 16:02:57.689809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.021 [2024-07-12 16:02:57.689835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 
00:26:28.022 [2024-07-12 16:02:57.689966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.689991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.690149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.690175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.690355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.690381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.690536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.690563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.690719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.690745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.690905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.690931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.691088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.691114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.691348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.691375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.691527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.691552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.691713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.691739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 
00:26:28.022 [2024-07-12 16:02:57.691890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.691915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.692066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.692091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.692246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.692271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.692435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.692461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.692585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.692611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.692762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.692787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.692936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.692961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.693195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.693221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.693372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.693401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.693535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.693560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 
00:26:28.022 [2024-07-12 16:02:57.693689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.693714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.693839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.693865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.693997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.694023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.694255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.694280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.694438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.694464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.694595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.694620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.694853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.694879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.695028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.695053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.695183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.695208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.695343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.695369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 
00:26:28.022 [2024-07-12 16:02:57.695528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.695552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.695701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.695727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.695915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.695941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.696103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.696129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.696361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.022 [2024-07-12 16:02:57.696387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.022 qpair failed and we were unable to recover it. 00:26:28.022 [2024-07-12 16:02:57.696525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.696550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.696704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.696730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.696964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.696990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.697142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.697167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.697346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.697372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 
00:26:28.023 [2024-07-12 16:02:57.697499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.697524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.697683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.697709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.697887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.697913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.698064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.698089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.698268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.698294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.698435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.698461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.698603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.698629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.698786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.698821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.698996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.699026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.699154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.699180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 
00:26:28.023 [2024-07-12 16:02:57.699321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.699348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.699507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.699532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.699662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.699687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.699862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.699888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.700016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.700042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.700200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.700225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.700387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.700413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.700552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.700579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.700709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.700747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 
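Note on the records above: errno 111 on Linux is ECONNREFUSED, which normally means the peer answered but nothing was listening on the port, so every qpair connect attempt to 10.0.0.2 port 4420 (the NVMe/TCP default port) is being refused while the test has the target down, and the driver reports each qpair as unrecoverable. The short C sketch below is an illustration only, not the SPDK posix_sock_create() implementation; it assumes nothing is listening on the chosen local port and simply reproduces the same errno with a plain socket.

/* Illustration only, not the SPDK posix_sock_create() path: on Linux a TCP
 * connect() to a reachable address with no listener on the port fails with
 * errno 111 (ECONNREFUSED), the same errno reported throughout this log.
 * Assumes nothing is listening on 127.0.0.1:4420 locally; the failing
 * target in the log itself was 10.0.0.2:4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* assumed closed port */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* Expected output: connect failed: errno=111 (Connection refused) */
        printf("connect failed: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}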
00:26:28.023 [2024-07-12 16:02:57.700803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c693f0 (9): Bad file descriptor 00:26:28.023 [2024-07-12 16:02:57.701035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.701077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.701262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.701292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.701442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.701470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.701635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.701662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.701832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.701859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.701989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.702016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.023 [2024-07-12 16:02:57.702149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.023 [2024-07-12 16:02:57.702176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.023 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.702383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.702430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.702590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.702628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 
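Note on the single "Failed to flush tqpair=0x1c693f0 (9): Bad file descriptor" record above: errno 9 is EBADF, meaning the socket descriptor behind that qpair had already been closed by the time the driver tried to flush its pending completions; the refused-connection records then continue as before. As another stand-alone illustration (again, not the SPDK code path), any I/O on an already-closed descriptor produces the same errno:

/* Illustration only: I/O on a file descriptor that was already closed fails
 * with errno 9 (EBADF), matching the "(9): Bad file descriptor" record in
 * the log above. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                       /* close the write end first */

    char byte = 0;
    if (write(fds[1], &byte, 1) < 0) {   /* then use the closed descriptor */
        /* Expected output: write failed: errno=9 (Bad file descriptor) */
        printf("write failed: errno=%d (%s)\n", errno, strerror(errno));
    }

    close(fds[0]);
    return 0;
}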
00:26:28.306 [2024-07-12 16:02:57.702775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.702802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.702933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.702960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.703135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.703162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.703293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.703326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.703510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.703537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.703693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.703721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.703870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.703895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.704026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.704055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.704218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.704244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.704376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.704403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 
00:26:28.306 [2024-07-12 16:02:57.704529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.704555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.704711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.704738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.704903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.704929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.705059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.705085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.705245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.705272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.705453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.705483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.705635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.705661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.705789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.705819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.705973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.705999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.706124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.706150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 
00:26:28.306 [2024-07-12 16:02:57.706288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.706313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.706453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.706479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.706634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.706659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.706813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.706839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.706993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.707018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.707145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.707170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.707332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.707359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.707512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.707537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.707671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.707696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.707854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.707879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 
00:26:28.306 [2024-07-12 16:02:57.708042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.708071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.708235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.708261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.708397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.708424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.708579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.708605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.708749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.708775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.708905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.708932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.709084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.306 [2024-07-12 16:02:57.709111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.306 qpair failed and we were unable to recover it. 00:26:28.306 [2024-07-12 16:02:57.709290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.709321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.709478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.709505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.709659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.709687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 
00:26:28.307 [2024-07-12 16:02:57.709847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.709874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.710030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.710057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.710235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.710262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.710418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.710445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.710596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.710627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.710807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.710833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.710963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.710989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.711114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.711141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.711292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.711323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.711482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.711508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 
00:26:28.307 [2024-07-12 16:02:57.711641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.711668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.711831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.711858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.712017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.712042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.712176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.712202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.712346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.712373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.712530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.712557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.712688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.712715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.712848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.712874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.713031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.713057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.713197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.713225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 
00:26:28.307 [2024-07-12 16:02:57.713384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.713411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.713535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.713562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.713703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.713729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.713860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.713886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.714010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.714037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.714201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.714227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.714384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.714410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.714529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.714555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.714691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.714717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.714871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.714897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 
00:26:28.307 [2024-07-12 16:02:57.715022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.715048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.715212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.715239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.715369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.715396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.715556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.715581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.715702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.715729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.715858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.715885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.716018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.716045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.307 [2024-07-12 16:02:57.716228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.307 [2024-07-12 16:02:57.716254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.307 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.716392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.716419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.716577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.716603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 
00:26:28.308 [2024-07-12 16:02:57.716757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.716784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.716935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.716961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.717120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.717146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.717308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.717339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.717485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.717515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.717642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.717668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.717795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.717823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.717972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.717998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.718127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.718154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.718288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.718328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 
00:26:28.308 [2024-07-12 16:02:57.718488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.718514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.718681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.718708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.718867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.718893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.719024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.719051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.719189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.719215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.719378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.719404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.719539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.719564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.719708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.719733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.719899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.719925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.720063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.720089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 
00:26:28.308 [2024-07-12 16:02:57.720239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.720266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.720421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.720448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.720600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.720626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.720757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.720784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.720925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.720951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.721110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.721137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.721304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.721338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.721480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.721506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.721642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.721668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.721784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.721810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 
00:26:28.308 [2024-07-12 16:02:57.721972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.721998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.722147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.722186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.722436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.722464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.722628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.722654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.722785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.722811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.722936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.722964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.723096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.723121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.723306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.723348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.308 [2024-07-12 16:02:57.723468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.308 [2024-07-12 16:02:57.723494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.308 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.723639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.723665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 
00:26:28.309 [2024-07-12 16:02:57.723847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.723872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.724007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.724033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.724192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.724219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.724357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.724383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.724543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.724569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.724727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.724753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.724893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.724920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.725048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.725081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.725218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.725245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.725420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.725460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 
00:26:28.309 [2024-07-12 16:02:57.725602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.725629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.725759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.725786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.725917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.725944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.726093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.726118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.726255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.726282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.726445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.726472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.726611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.726637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.726772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.726799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.726941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.726969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.727138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.727164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 
00:26:28.309 [2024-07-12 16:02:57.727327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.727354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.727491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.727516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.727672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.727697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.727826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.727852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.728012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.728038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.728162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.728190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.728359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.728386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.728514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.728539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.728677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.728702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.728836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.728865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 
00:26:28.309 [2024-07-12 16:02:57.729037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.729062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.729219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.729245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.729380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.729407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.729564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.729590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.729760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.729785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.729935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.729961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.730122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.730148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.730274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.730300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.730475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.730501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 00:26:28.309 [2024-07-12 16:02:57.730625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.309 [2024-07-12 16:02:57.730650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.309 qpair failed and we were unable to recover it. 
00:26:28.310 [2024-07-12 16:02:57.730777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.730803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.730939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.730964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.731127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.731152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.731309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.731346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.731481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.731510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.731687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.731726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.731895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.731924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.732065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.732093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.732215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.732241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.732375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.732402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 
00:26:28.310 [2024-07-12 16:02:57.732534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.732561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.732716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.732743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.732957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.732983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.733159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.733185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.733343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.733369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.733508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.733534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.733665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.733692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.733822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.733847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.733983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.734014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.734184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.734212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 
00:26:28.310 [2024-07-12 16:02:57.734367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.734394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.734529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.734555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.734686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.734714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.734845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.734872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.735006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.735032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.735194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.735220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.735370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.735409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.735545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.735573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.735705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.735735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.735891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.735919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 
00:26:28.310 [2024-07-12 16:02:57.736042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.736068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.736210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.736236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.310 [2024-07-12 16:02:57.736416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.310 [2024-07-12 16:02:57.736444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.310 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.736569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.736595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.736725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.736752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.736905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.736931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 [2024-07-12 16:02:57.736928] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.736971] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.311 [2024-07-12 16:02:57.736986] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.311 [2024-07-12 16:02:57.736998] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.311 [2024-07-12 16:02:57.737009] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.311 [2024-07-12 16:02:57.737075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:28.311 [2024-07-12 16:02:57.737123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.737152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.737325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.737353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 
00:26:28.311 [2024-07-12 16:02:57.737368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:28.311 [2024-07-12 16:02:57.737418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:28.311 [2024-07-12 16:02:57.737421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:28.311 [2024-07-12 16:02:57.737508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.737536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.737682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.737708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.737929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.737955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.738077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.738103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.738231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.738257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.738437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.738464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.738599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.738627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.738781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.738806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.738935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.738961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 
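The "Reactor started on core 4/5/6/7" notices interleaved above suggest the nvmf target application was still bringing up its reactors while these connect attempts were already being refused. One way a test harness could avoid hammering a not-yet-listening endpoint is to probe the port until a listener accepts; a hypothetical helper along those lines (not part of SPDK or of this test, shown only as a sketch, reusing the address and port from the log):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical helper: retry a TCP connect to ip:port until a listener
 * accepts, an unexpected error occurs, or max_tries is exhausted. */
bool wait_for_listener(const char *ip, uint16_t port, int max_tries)
{
    struct sockaddr_in addr = {0};

    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            return false;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return true;                  /* listener is up */
        }
        int err = errno;                  /* save before close() can clobber it */
        close(fd);
        if (err != ECONNREFUSED) {
            return false;                 /* something other than "not listening yet" */
        }
        sleep(1);                         /* errno 111: target not ready, retry */
    }
    return false;
}

/* e.g. wait_for_listener("10.0.0.2", 4420, 30) before creating qpairs. */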
00:26:28.311 [2024-07-12 16:02:57.739084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.739110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.739277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.739303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.739449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.739475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.739624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.739650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.739803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.739829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.739962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.739989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.740149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.740176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.740332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.740358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.740499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.740526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.740682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.740707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 
00:26:28.311 [2024-07-12 16:02:57.740858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.740884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.741109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.741134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.741264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.741289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.741430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.741457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.741590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.741617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.741746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.741772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.741918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.741944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.742082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.742108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.742264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.742290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.742432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.742459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 
00:26:28.311 [2024-07-12 16:02:57.742611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.742637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.742761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.742787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.742912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.742937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.311 qpair failed and we were unable to recover it. 00:26:28.311 [2024-07-12 16:02:57.743143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.311 [2024-07-12 16:02:57.743169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.743324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.743350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.743489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.743515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.743647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.743673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.743802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.743829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.743968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.743994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.744126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.744152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 
00:26:28.312 [2024-07-12 16:02:57.744276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.744302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.744448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.744474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.744603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.744629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.744763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.744789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.744961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.744987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.745133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.745164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.745288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.745320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.745460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.745486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.745619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.745645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.745779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.745806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 
00:26:28.312 [2024-07-12 16:02:57.745937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.745964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.746090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.746117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.746243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.746269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.746393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.746420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.746552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.746579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.746709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.746735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.746864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.746890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.747026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.747052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.747203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.747228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.747365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.747392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 
00:26:28.312 [2024-07-12 16:02:57.747520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.747547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.747698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.747724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.747854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.747879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.748013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.748039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.748162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.748188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.748319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.748346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.748487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.748513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.748648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.748677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.748807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.748832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.748972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.748998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 
00:26:28.312 [2024-07-12 16:02:57.749216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.749242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.749377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.749403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.749566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.749592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.312 [2024-07-12 16:02:57.749734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.312 [2024-07-12 16:02:57.749760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.312 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.749892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.749918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.750053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.750079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.750202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.750228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.750357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.750383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.750531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.750559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.750689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.750715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 
00:26:28.313 [2024-07-12 16:02:57.750857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.750882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.751019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.751045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.751194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.751235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.751373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.751403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.751538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.751568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.751701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.751736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.751862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.751888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.752067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.752093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.752225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.752251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.752424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.752450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 
00:26:28.313 [2024-07-12 16:02:57.752585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.752621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.752784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.752811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.752938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.752966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.753103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.753130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.753275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.753301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.753455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.753482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.753634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.753660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.753785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.753810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.753935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.753961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.754101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.754127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 
00:26:28.313 [2024-07-12 16:02:57.754250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.754275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.754405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.754432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.754571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.754597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.754826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.754857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.755026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.755060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.755217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.755245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.755396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.755423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.755560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.755587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.755739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.755766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.755918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.755943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 
00:26:28.313 [2024-07-12 16:02:57.756080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.756108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.756242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.756269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.756429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.756470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.756612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.756648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.756782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.313 [2024-07-12 16:02:57.756809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.313 qpair failed and we were unable to recover it. 00:26:28.313 [2024-07-12 16:02:57.756958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.756984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.757114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.757140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.757282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.757310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.757453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.757480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.757624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.757650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 
00:26:28.314 [2024-07-12 16:02:57.757776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.757802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.757928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.757953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.758090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.758116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.758253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.758279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.758442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.758468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.758592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.758625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.758794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.758820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.758957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.758984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.759114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.759142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.759274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.759301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 
00:26:28.314 [2024-07-12 16:02:57.759462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.759489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.759616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.759642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.759771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.759799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.759919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.759944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.760077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.760102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.760240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.760266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.760419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.760446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.760583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.760608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.760746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.760772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.760924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.760955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 
00:26:28.314 [2024-07-12 16:02:57.761079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.761104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.761245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.761271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.761404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.761431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.761556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.761581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.761717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.761743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.761870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.761895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.762044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.762070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.762220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.762247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.762389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.762416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.762546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.762571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 
00:26:28.314 [2024-07-12 16:02:57.762702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.314 [2024-07-12 16:02:57.762730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.314 qpair failed and we were unable to recover it. 00:26:28.314 [2024-07-12 16:02:57.762867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.762893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.763021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.763048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.763187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.763212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.763387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.763413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.763549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.763574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.763713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.763739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.763870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.763896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.764027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.764053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.764190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.764216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 
00:26:28.315 [2024-07-12 16:02:57.764357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.764384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.764514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.764539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.764671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.764698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.764821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.764846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.764979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.765004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.765155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.765180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.765331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.765364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.765501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.765527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.765667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.765693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.765822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.765847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 
00:26:28.315 [2024-07-12 16:02:57.766057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.766083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.766214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.766240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.766385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.766411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.766534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.766560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.766716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.766742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.766871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.766896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.767024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.767050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.767185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.767210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.767365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.767391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.767550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.767576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 
00:26:28.315 [2024-07-12 16:02:57.767711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.767736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.767865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.767891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.768015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.768041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.768206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.768232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.768380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.768421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.768549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.768576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.768701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.768728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.768880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.768906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.769059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.769084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.769226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.769253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 
00:26:28.315 [2024-07-12 16:02:57.769473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.769500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.769628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.315 [2024-07-12 16:02:57.769653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.315 qpair failed and we were unable to recover it. 00:26:28.315 [2024-07-12 16:02:57.769777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.769804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.769982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.770008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.770237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.770263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.770405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.770432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.770560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.770586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.770734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.770759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.770924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.770951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.771120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.771146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 
00:26:28.316 [2024-07-12 16:02:57.771264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.771291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.771455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.771480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.771606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.771634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.771768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.771795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.771943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.771969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.772087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.772112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.772247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.772273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.772428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.772454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.772591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.772620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.772751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.772777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 
00:26:28.316 [2024-07-12 16:02:57.772944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.772971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.773125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.773151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.773283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.773309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.773455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.773481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.773614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.773639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.773792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.773818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.773946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.773973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.774110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.774136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.774293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.774324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.774461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.774489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 
00:26:28.316 [2024-07-12 16:02:57.774615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.774648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.774799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.774825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.774954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.774980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.775109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.775136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.775274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.775300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.775454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.775481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.775618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.775643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.775782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.775809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.775934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.775961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.776106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.776132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 
00:26:28.316 [2024-07-12 16:02:57.776271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.776297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.776429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.776455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.776586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.316 [2024-07-12 16:02:57.776612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.316 qpair failed and we were unable to recover it. 00:26:28.316 [2024-07-12 16:02:57.776767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.776794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.776927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.776953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.777114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.777141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.777343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.777370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.777505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.777532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.777704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.777730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.777868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.777902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 
00:26:28.317 [2024-07-12 16:02:57.778029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.778055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.778194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.778222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.778360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.778387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.778535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.778560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.778719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.778744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.778869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.778894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.779027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.779053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.779203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.779238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.779371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.779397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.779616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.779642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 
00:26:28.317 [2024-07-12 16:02:57.779766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.779791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.779937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.779962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.780094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.780120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.780264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.780290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.780416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.780442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.780589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.780615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.780770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.780797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.780972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.780998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.781153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.781178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.781295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.781328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 
00:26:28.317 [2024-07-12 16:02:57.781467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.781493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.781634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.781660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.781791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.781818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.781973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.781998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.782130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.782155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.782327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.782367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.782506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.782533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.782670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.782697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.782828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.782854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.782973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.782998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 
00:26:28.317 [2024-07-12 16:02:57.783124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.783150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.783302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.783333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.783458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.783484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.783617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.783644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.317 qpair failed and we were unable to recover it. 00:26:28.317 [2024-07-12 16:02:57.783771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.317 [2024-07-12 16:02:57.783802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.783947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.783974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.784110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.784137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.784274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.784300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.784439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.784466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.784625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.784651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 
00:26:28.318 [2024-07-12 16:02:57.784781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.784807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.784942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.784969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.785112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.785139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.785274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.785300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.785440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.785468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.785637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.785664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.785800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.785826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.785982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.786008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.786148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.786174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.786303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.786336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 
00:26:28.318 [2024-07-12 16:02:57.786496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.786521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.786679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.786706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.786840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.786867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.787019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.787045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.787175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.787201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.787323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.787350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.787480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.787507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.787647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.787673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.787800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.787827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.787988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.788014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 
00:26:28.318 [2024-07-12 16:02:57.788154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.788180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.788327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.788354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.788474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.788501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.788629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.788656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.788780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.788806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.788930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.788957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.789154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.789181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.789349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.789376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.789501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.789527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 00:26:28.318 [2024-07-12 16:02:57.789675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.318 [2024-07-12 16:02:57.789701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.318 qpair failed and we were unable to recover it. 
00:26:28.318 [2024-07-12 16:02:57.789830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.789856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.790010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.790035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.790242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.790269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.790395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.790422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.790568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.790603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.790745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.790771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.790923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.790949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.791074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.791100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.791231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.791257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.791387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.791413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 
00:26:28.319 [2024-07-12 16:02:57.791547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.791574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.791718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.791745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.791876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.791902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.792033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.792059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.792195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.792221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.792367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.792393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.792521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.792547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.792702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.792728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.792932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.792958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.793087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.793113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 
00:26:28.319 [2024-07-12 16:02:57.793256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.793283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.793447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.793474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.793605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.793632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.793770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.793796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.793927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.793953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.794087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.794113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.794237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.794263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.794403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.794430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.794559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.794584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.794801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.794827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 
00:26:28.319 [2024-07-12 16:02:57.794956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.794983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.795119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.795145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.795266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.795292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.795428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.795457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.795586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.795612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.795742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.795769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.795929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.795956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.796087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.796113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.796243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.796270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.319 [2024-07-12 16:02:57.796400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.796427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 
00:26:28.319 [2024-07-12 16:02:57.796554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.319 [2024-07-12 16:02:57.796579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.319 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.796734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.796760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.796882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.796907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.797060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.797086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.797222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.797252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.797428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.797470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.797628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.797669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.797824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.797852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.797978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.798004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.798133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.798158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 
00:26:28.320 [2024-07-12 16:02:57.798286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.798313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.798488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.798515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.798642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.798668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.798810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.798837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.798968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.798994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.799119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.799145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.799293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.799327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.799465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.799493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.799650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.799677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.799831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.799858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 
00:26:28.320 [2024-07-12 16:02:57.799980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.800006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.800130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.800155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.800299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.800336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.800467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.800493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.800643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.800669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.800805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.800832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.800988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.801013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.801140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.801166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.801294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.801328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.801482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.801509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 
00:26:28.320 [2024-07-12 16:02:57.801633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.801660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.801882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.801908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.802036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.802063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.802186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.802212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.802343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.802369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.802556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.802581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.802713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.802738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.802867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.802893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.803049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.803075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.803216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.803241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 
00:26:28.320 [2024-07-12 16:02:57.803397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.320 [2024-07-12 16:02:57.803423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.320 qpair failed and we were unable to recover it. 00:26:28.320 [2024-07-12 16:02:57.803583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.803608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.803747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.803786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.803931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.803960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.804092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.804124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.804281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.804307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.804458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.804484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.804605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.804630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.804763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.804789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.804923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.804949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 
00:26:28.321 [2024-07-12 16:02:57.805084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.805110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.805333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.805359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.805500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.805527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.805649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.805674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.805807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.805834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.805960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.805987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.806161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.806189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.806328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.806354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.806511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.806538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.806693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.806721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 
00:26:28.321 [2024-07-12 16:02:57.806850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.806877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.807010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.807035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.807174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.807200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.807337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.807367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.807500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.807525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.807659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.807685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.807829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.807865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.807999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.808025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.808170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.808210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.808379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.808408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 
00:26:28.321 [2024-07-12 16:02:57.808547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.808575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.808738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.808770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.808900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.808927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.809078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.809114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.809252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.809281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.809425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.809452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.809603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.809629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.809750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.809778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.809919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.809944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.810083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.810109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 
00:26:28.321 [2024-07-12 16:02:57.810240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.810267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.810394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.810423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.321 qpair failed and we were unable to recover it. 00:26:28.321 [2024-07-12 16:02:57.810554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.321 [2024-07-12 16:02:57.810581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.810713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.810739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.810869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.810894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.811060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.811086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.811212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.811240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.811388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.811429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.811589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.811617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.811773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.811799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 
00:26:28.322 [2024-07-12 16:02:57.811921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.811946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.812071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.812097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.812246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.812272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.812420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.812447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.812578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.812604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.812734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.812760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.812889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.812915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.813068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.813094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.813220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.813251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.813375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.813401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 
00:26:28.322 [2024-07-12 16:02:57.813529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.813555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.813724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.813750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.813881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.813909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.814051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.814077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.814236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.814262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.814389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.814416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.814557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.814586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.814723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.814749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.814883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.814909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.815064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.815092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 
00:26:28.322 [2024-07-12 16:02:57.815221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.815247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.815377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.815404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.815543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.815569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.815732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.815758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.815881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.815908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.816037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.816064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.816195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.816220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.816359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.816386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.816525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.816551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.816699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.816725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 
00:26:28.322 [2024-07-12 16:02:57.816846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.816872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.817024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.817051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.817211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.817237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.322 [2024-07-12 16:02:57.817371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.322 [2024-07-12 16:02:57.817398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.322 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.817559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.817585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.817719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.817750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.817898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.817924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.818044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.818069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.818227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.818252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.818381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.818408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 
00:26:28.323 [2024-07-12 16:02:57.818541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.818566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.818687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.818712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.818841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.818867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.819010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.819036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.819161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.819188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.819396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.819422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.819552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.819578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.819718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.819744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.819871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.819897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.820032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.820059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 
00:26:28.323 [2024-07-12 16:02:57.820224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.820250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.820391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.820417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.820546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.820574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.820727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.820768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.820917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.820946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.821115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.821142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.821266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.821293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.821446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.821473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.821595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.821621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.821746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.821771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 
00:26:28.323 [2024-07-12 16:02:57.821914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.821940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.822061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.822087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.822212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.822244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.822377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.822404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.822534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.822562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.822698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.822725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.822865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.822892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.823010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.823037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.823162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.823188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.323 qpair failed and we were unable to recover it. 00:26:28.323 [2024-07-12 16:02:57.823321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.323 [2024-07-12 16:02:57.823348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 
00:26:28.324 [2024-07-12 16:02:57.823475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.823501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.823637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.823663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.823808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.823834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.823954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.823980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.824121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.824148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.824368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.824395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.824544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.824570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.824721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.824746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.824935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.824962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.825089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.825114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 
00:26:28.324 [2024-07-12 16:02:57.825239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.825265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.825399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.825425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.825564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.825589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.825739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.825764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.825905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.825931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.826080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.826106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.826251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.826277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.826417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.826445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.826590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.826616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.826774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.826814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 
00:26:28.324 [2024-07-12 16:02:57.826959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.826987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.827125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.827154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.827292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.827342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.827503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.827529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.827680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.827706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.827867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.827893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.828025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.828051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.828187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.828213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.828342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.828368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.828498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.828524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 
00:26:28.324 [2024-07-12 16:02:57.828684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.828724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.828873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.828900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.829038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.829065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.829207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.829235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.829372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.829399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.829524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.829551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.829691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.829717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.829872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.829898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.830027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.830054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 00:26:28.324 [2024-07-12 16:02:57.830189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.830216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.324 qpair failed and we were unable to recover it. 
00:26:28.324 [2024-07-12 16:02:57.830344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.324 [2024-07-12 16:02:57.830371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.830505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.830531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.830665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.830693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.830817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.830843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.830966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.830993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.831126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.831152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.831294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.831326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.831461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.831488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.831607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.831633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.831753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.831779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 
00:26:28.325 [2024-07-12 16:02:57.831906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.831932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.832063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.832089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.832224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.832250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.832385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.832412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.832534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.832560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.832696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.832721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.832858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.832884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.833014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.833041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.833189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.833215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.833348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.833387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 
00:26:28.325 [2024-07-12 16:02:57.833538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.833564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.833703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.833729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.833865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.833891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.834051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.834077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.834213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.834239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.834390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.834417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.834538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.834564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.834692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.834719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.834849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.834876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.835007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.835034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 
00:26:28.325 [2024-07-12 16:02:57.835169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.835195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.835334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.835360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.835517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.835544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.835695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.835720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.835846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.835872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.836053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.836079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.836208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.836233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.836374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.836402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.836537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.836562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.836691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.836717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 
00:26:28.325 [2024-07-12 16:02:57.836870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.325 [2024-07-12 16:02:57.836896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.325 qpair failed and we were unable to recover it. 00:26:28.325 [2024-07-12 16:02:57.837036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.837062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.837218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.837244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.837393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.837420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.837550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.837576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.837715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.837740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.837882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.837913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.838062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.838088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.838240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.838267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.838411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.838437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 
00:26:28.326 [2024-07-12 16:02:57.838564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.838590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.838716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.838742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.838892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.838918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.839049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.839076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.839207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.839233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.839401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.839428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.839553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.839580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.839706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.839732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.839870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.839897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.840032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.840067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 
00:26:28.326 [2024-07-12 16:02:57.840196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.840222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.840363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.840390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.840515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.840542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.840695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.840721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.840873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.840899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.841070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.841096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.841230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.841256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.841413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.841442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.841590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.841616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.841772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.841798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 
00:26:28.326 [2024-07-12 16:02:57.841937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.841964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.842096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.842122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.842242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.842271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.842452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.842479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.842604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.842630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.842758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.842784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.842924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.842951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.843088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.843114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.843266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.843292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.843430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.843456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 
00:26:28.326 [2024-07-12 16:02:57.843596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.843635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.843797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.326 [2024-07-12 16:02:57.843824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.326 qpair failed and we were unable to recover it. 00:26:28.326 [2024-07-12 16:02:57.843963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.843989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.844120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.844146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.844273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.844300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.844461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.844487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.844630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.844658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.844788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.844814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.844975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.845001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.845158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.845184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 
00:26:28.327 [2024-07-12 16:02:57.845365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.845393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.845528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.845556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.845708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.845734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.845883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.845909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.846060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.846093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.846241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.846267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.846392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.846418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.846582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.846608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.846744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.846771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.846905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.846932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 
00:26:28.327 [2024-07-12 16:02:57.847104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.847130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.847262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.847288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.847430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.847457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.847594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.847620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.847747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.847775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.847902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.847929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.848065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.848092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.848240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.848267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.848413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.848439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.848596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.848622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 
00:26:28.327 [2024-07-12 16:02:57.848787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.848813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.848946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.848973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.849121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.849148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.849302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.849334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.849496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.849522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.849659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.849685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.849813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.849838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.327 [2024-07-12 16:02:57.849960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.327 [2024-07-12 16:02:57.849986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.327 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.850147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.850174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.850360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.850386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 
00:26:28.328 [2024-07-12 16:02:57.850536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.850563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.850705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.850731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.850890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.850915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.851039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.851066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.851222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.851248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.851396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.851423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.851552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.851583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.851738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.851764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.851944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.851970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.852119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.852145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 
00:26:28.328 [2024-07-12 16:02:57.852274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.852300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.852447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.852473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.852604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.852630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.852758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.852784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.852935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.852962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.853092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.853118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.853257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.853284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.853427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.853454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.853606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.853632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.853760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.853786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 
00:26:28.328 [2024-07-12 16:02:57.853952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.853978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.854107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.854133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.854258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.854284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.854446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.854473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.854597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.854624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.854761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.854787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.854915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.854941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.855070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.855096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.855244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.855270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.855438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.855478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 
00:26:28.328 [2024-07-12 16:02:57.855611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.855637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.855793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.855820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.855944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.855970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.856096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.856122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.856265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.856291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.856452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.856478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.856602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.856627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.856753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.856779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.328 qpair failed and we were unable to recover it. 00:26:28.328 [2024-07-12 16:02:57.856901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.328 [2024-07-12 16:02:57.856926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.857077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.857102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 
00:26:28.329 [2024-07-12 16:02:57.857240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.857266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.857420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.857446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.857567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.857592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.857723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.857749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.857902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.857927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.858047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.858072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.858215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.858246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.858401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.858441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.858576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.858603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.858762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.858789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 
00:26:28.329 [2024-07-12 16:02:57.858913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.858940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.859119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.859144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.859276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.859302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.859448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.859475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.859634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.859660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.859801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.859827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.859985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.860010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.860150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.860176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.860339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.860366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.860505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.860533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 
00:26:28.329 [2024-07-12 16:02:57.860666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.860693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.860829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.860854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.861007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.861032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.861167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.861193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.861326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.861353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.861502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.861527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.861660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.861686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.861840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.861866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.862001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.862026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.862174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.862199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 
00:26:28.329 [2024-07-12 16:02:57.862357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.862383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.862537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.862563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.862703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.862728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.862895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.862925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.863071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.863097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.863220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.863245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.863398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.863424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.863552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.863577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.863707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.863733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 00:26:28.329 [2024-07-12 16:02:57.863856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.329 [2024-07-12 16:02:57.863881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.329 qpair failed and we were unable to recover it. 
00:26:28.330 [2024-07-12 16:02:57.864057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.864082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.864209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.864234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.864361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.864388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.864553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.864578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.864716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.864741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.864866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.864892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.865024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.865049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.865212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.865238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.865385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.865412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.865566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.865592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 
00:26:28.330 [2024-07-12 16:02:57.865723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.865748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.865874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.865899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.866033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.866059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.866188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.866213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.866363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.866390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.866542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.866568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.866700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.866725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.866855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.866880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.867035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.867060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.867202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.867227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 
00:26:28.330 [2024-07-12 16:02:57.867386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.867413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.867574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.867600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.867716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.867741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.867898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.867924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.868047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.868072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.868200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.868226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.868357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.868384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.868515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.868540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.868717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.868743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.868879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.868905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 
00:26:28.330 [2024-07-12 16:02:57.869050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.869075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.869234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.869259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.869390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.869416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.869548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.869574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.869739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.869778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.869925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.869953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.870124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.870151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.870274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.870300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.870464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.870490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 00:26:28.330 [2024-07-12 16:02:57.870619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.330 [2024-07-12 16:02:57.870646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.330 qpair failed and we were unable to recover it. 
00:26:28.330 [2024-07-12 16:02:57.870772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.870797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.870950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.870976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.871114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.871139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.871293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.871328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.871469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.871496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.871660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.871686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.871810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.871836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.872006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.872037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.872166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.872191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 00:26:28.331 [2024-07-12 16:02:57.872351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.331 [2024-07-12 16:02:57.872377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.331 qpair failed and we were unable to recover it. 
00:26:28.331 [2024-07-12 16:02:57.872503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.331 [2024-07-12 16:02:57.872529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.331 qpair failed and we were unable to recover it.
00:26:28.331 [2024-07-12 16:02:57.872664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.331 [2024-07-12 16:02:57.872690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.331 qpair failed and we were unable to recover it.
[the same three-record failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt timestamped 16:02:57.872 through 16:02:57.907, log timestamps 00:26:28.331 through 00:26:28.336]
00:26:28.336 [2024-07-12 16:02:57.907529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.336 [2024-07-12 16:02:57.907555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.336 qpair failed and we were unable to recover it.
00:26:28.336 [2024-07-12 16:02:57.907689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.907715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.907870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.907896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.908048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.908073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.908203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.908229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.908370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.908397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.908526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.908553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.908686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.908712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.908864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.908890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.909036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.909061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.909241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.909267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 
00:26:28.336 [2024-07-12 16:02:57.909415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.909442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.909575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.909600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.909727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.336 [2024-07-12 16:02:57.909753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.336 qpair failed and we were unable to recover it. 00:26:28.336 [2024-07-12 16:02:57.909901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.909927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.910052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.910096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.910225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.910251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.910371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.910397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.910552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.910577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.910710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.910737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.910888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.910914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 
00:26:28.337 [2024-07-12 16:02:57.911063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.911089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.911254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.911280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.911410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.911437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.911559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.911584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.911752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.911777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.911901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.911926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.912106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.912131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.912259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.912286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.912443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.912468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.912615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.912641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 
00:26:28.337 [2024-07-12 16:02:57.912770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.912795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.912916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.912941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.913065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.913091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.913245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.913271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.913408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.913434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.913559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.913586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.913768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.913794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.913922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.913947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.914087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.914112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.914293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.914325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 
00:26:28.337 [2024-07-12 16:02:57.914453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.914478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.914609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.914636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.914764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.914790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.914925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.914952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.915073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.915099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.915244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.915270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.915432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.915460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.915599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.915625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.915805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.915831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.915960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.915987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 
00:26:28.337 [2024-07-12 16:02:57.916113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.916140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.916296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.916330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.916477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.916502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.337 qpair failed and we were unable to recover it. 00:26:28.337 [2024-07-12 16:02:57.916653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.337 [2024-07-12 16:02:57.916679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.916843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.916877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.917010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.917036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.917187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.917213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.917341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.917367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.917492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.917519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.917682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.917708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 
00:26:28.338 [2024-07-12 16:02:57.917838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.917864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.918015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.918041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.918180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.918205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.918337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.918364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.918491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.918517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.918672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.918698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.918829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.918856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.918984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.919010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.919136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.919162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.919328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.919355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 
00:26:28.338 [2024-07-12 16:02:57.919490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.919516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.919640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.919666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.919800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.919827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.919949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.919975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.920105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.920132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.920256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.920282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.920412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.920438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.920584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.920609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.920741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.920768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.920891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.920916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 
00:26:28.338 [2024-07-12 16:02:57.921067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.921093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.921267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.921293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.921453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.921480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.921608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.921634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.921759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.921785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.921906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.921932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.922066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.922092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.922245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.922271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.922404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.922430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.922576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.922601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 
00:26:28.338 [2024-07-12 16:02:57.922763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.922790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.922915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.922940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.923080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.923105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.923231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.338 [2024-07-12 16:02:57.923257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.338 qpair failed and we were unable to recover it. 00:26:28.338 [2024-07-12 16:02:57.923388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.923419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.923547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.923573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.923726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.923753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.923874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.923900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.924043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.924068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.924236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.924262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 
00:26:28.339 [2024-07-12 16:02:57.924395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.924421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.924560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.924586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.924722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.924750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.924874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.924900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.925032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.925058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.925189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.925215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.925348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.925374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.925515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.925541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.925698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.925724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.925847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.925873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 
00:26:28.339 [2024-07-12 16:02:57.926000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.926026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.926180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.926206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.926360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.926387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.926523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.926549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.926686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.926712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.926842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.926868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.927040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.927066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.927189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.927215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.927337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.927363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.927492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.927517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 
00:26:28.339 [2024-07-12 16:02:57.927637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.927663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.927789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.927815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.927941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.927967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.928099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.928126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.928251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.928277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.928458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.928483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.928620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.928647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.928805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.928832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.928961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.928986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 00:26:28.339 [2024-07-12 16:02:57.929124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.339 [2024-07-12 16:02:57.929150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.339 qpair failed and we were unable to recover it. 
00:26:28.339 [2024-07-12 16:02:57.929280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.339 [2024-07-12 16:02:57.929305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.339 qpair failed and we were unable to recover it.
00:26:28.339-00:26:28.345 [the same three-line failure (posix_sock_create connect() errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats back-to-back from 2024-07-12 16:02:57.929280 through 16:02:57.964413]
00:26:28.345 [2024-07-12 16:02:57.964567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.964594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.964736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.964762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.964899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.964926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.965064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.965090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.965252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.965278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.965416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.965443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.965567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.965594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.965726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.965752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.965895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.965921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.966041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.966067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 
00:26:28.345 [2024-07-12 16:02:57.966203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.966229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.966395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.966422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.966550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.966576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.966708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.966734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.966881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.966906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.967057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.967083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.967237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.967263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.967387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.967414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.967576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.967602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.967763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.967789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 
00:26:28.345 [2024-07-12 16:02:57.967951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.967978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.968145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.968171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.968288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.968322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.968456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.345 [2024-07-12 16:02:57.968481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.345 qpair failed and we were unable to recover it. 00:26:28.345 [2024-07-12 16:02:57.968604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.968629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.968753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.968778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.968933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.968960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.969085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.969110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.969245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.969270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.969410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.969438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 
00:26:28.346 [2024-07-12 16:02:57.969591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.969617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.969744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.969769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.969939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.969965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.970104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.970130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.970258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.970283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.970439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.970466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.970595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.970620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.970757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.970783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.970937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.970963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.971089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.971116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 
00:26:28.346 [2024-07-12 16:02:57.971248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.971274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.971428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.971455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.971614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.971640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.971765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.971792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.971947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.971972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.972120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.972145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.972271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.972301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.972448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.972475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.972598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.972625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.972744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.972770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 
00:26:28.346 [2024-07-12 16:02:57.972903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.972929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.973062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.973088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.973236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.973262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.973404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.973432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.973553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.973579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.973712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.973737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.973876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.973902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.974051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.974077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.974207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.974231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.974376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.974402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 
00:26:28.346 [2024-07-12 16:02:57.974527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.974552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.974705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.974734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.974888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.974912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.975030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.975055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.975205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.346 [2024-07-12 16:02:57.975229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.346 qpair failed and we were unable to recover it. 00:26:28.346 [2024-07-12 16:02:57.975379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.975405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.975538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.975562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.975686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.975711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.975862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.975887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.976037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.976062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 
00:26:28.347 [2024-07-12 16:02:57.976191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.976216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.976379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.976413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.976541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.976566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.976717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.976753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.976882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.976908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.977036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.977062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.977190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.977216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.977363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.977389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.977536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.977561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.977686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.977712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 
00:26:28.347 [2024-07-12 16:02:57.977837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.977864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.978001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.978027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.978147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.978172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.978319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.978344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.978492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.978518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.978640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.978666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.978819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.978845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.978994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.979020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.979166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.979205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.979350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.979385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 
00:26:28.347 [2024-07-12 16:02:57.979513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.979538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.979695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.979721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.979849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.979876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.979993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.980019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.980177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.980205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.980349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.980375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.980513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.980538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.980677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.980704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.980842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.980868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.980999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.981024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 
00:26:28.347 [2024-07-12 16:02:57.981155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.981182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.981321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.981352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.981485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.981511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.981641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.981667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.981799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.981825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.347 [2024-07-12 16:02:57.981958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.347 [2024-07-12 16:02:57.981985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.347 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.982142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.982168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.982305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.982339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.982469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.982495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.982623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.982648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 
00:26:28.348 [2024-07-12 16:02:57.982783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.982810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.982961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.982987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.983109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.983134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.983278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.983304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.983454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.983480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.983609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.983634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.983784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.983810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.983939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.983964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.984102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.984127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.984261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.984286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 
00:26:28.348 [2024-07-12 16:02:57.984429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.984456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.984593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.984619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.984748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.984776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.984929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.984955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.985097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.985126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.985253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.985280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.985424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.985449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.985577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.985602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.985772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.985798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.985921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.985947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 
00:26:28.348 [2024-07-12 16:02:57.986095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.986121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.986275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.986300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.986444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.986469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.986604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.986629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.986754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.986780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.986905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.986931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.987064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.987092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.987220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.987247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.987411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.987437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.987568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.987594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 
00:26:28.348 [2024-07-12 16:02:57.987727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.987751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.987889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.987921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.988077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.988103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.348 qpair failed and we were unable to recover it. 00:26:28.348 [2024-07-12 16:02:57.988229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.348 [2024-07-12 16:02:57.988254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.988492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.988518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.988672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.988696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.988849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.988874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.989001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.989027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.989160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.989185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.989312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.989349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 
00:26:28.349 [2024-07-12 16:02:57.989483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.989509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.989647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.989672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.989817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.989842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.989987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.990013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.990138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.990164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.990325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.990351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.990485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.990510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.990647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.990673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.990822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.990847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.990977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.991002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 
00:26:28.349 [2024-07-12 16:02:57.991141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.991166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.991285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.991311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.991454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.991480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.991639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.991665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.991823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.991849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.991978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.992003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.992143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.992168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.992293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.992325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.992511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.992537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.992667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.992693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 
00:26:28.349 [2024-07-12 16:02:57.992814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.992839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.992971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.992998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.993151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.993176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.993303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.993335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.993481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.993507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.993663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.349 [2024-07-12 16:02:57.993702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.349 qpair failed and we were unable to recover it. 00:26:28.349 [2024-07-12 16:02:57.993854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.993881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.994031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.994057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.994188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.994215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.994349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.994382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 
00:26:28.350 [2024-07-12 16:02:57.994532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.994557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.994717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.994750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.994884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.994910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.995089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.995114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.995240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.995265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.995446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.995472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.995600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.995625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.995755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.995780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.995910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.995937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.996094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.996119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 
00:26:28.350 [2024-07-12 16:02:57.996252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.996277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.996433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.996460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.996617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.996642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.996764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.996789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.996964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.996990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.997125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.997151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.997309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.997341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.997469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.997495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.997683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.997709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.997837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.997863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 
00:26:28.350 [2024-07-12 16:02:57.997996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.998021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.998159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.998185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.998327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.998370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.998511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.998538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.998662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.998688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.998838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.998863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.998981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.999006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.999139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.999164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.999307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.999341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.999476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.999501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 
00:26:28.350 [2024-07-12 16:02:57.999655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.999681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.999814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:57.999840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:57.999975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:58.000002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:58.000145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:58.000170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:58.000298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:58.000330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:58.000456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:58.000482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:58.000636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.350 [2024-07-12 16:02:58.000662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.350 qpair failed and we were unable to recover it. 00:26:28.350 [2024-07-12 16:02:58.000789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.000814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.000950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.000975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.001105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.001132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 
00:26:28.351 [2024-07-12 16:02:58.001259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.001284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.001433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.001464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.001601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.001626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.001777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.001803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.001962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.001988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.002119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.002144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.002269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.002294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.002437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.002463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.002589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.002614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.002757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.002783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 
00:26:28.351 [2024-07-12 16:02:58.002940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.002965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.003093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.003119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.003268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.003294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.003436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.003462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.003622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.003648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.003806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.003832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.003981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.004006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.004132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.004157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.004287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.004313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.004462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.004488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 
00:26:28.351 [2024-07-12 16:02:58.004617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.004643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.004798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.004823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.004972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.004998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.005131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.005156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.005279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.005305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.005452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.005478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.005619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.005644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.005797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.005822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.005962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.005988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.006108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.006134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 
00:26:28.351 [2024-07-12 16:02:58.006299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.006341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.006496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.006530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.006687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.006713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.006836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.006862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.006994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.007020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.007143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.007171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.007337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.007365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.351 qpair failed and we were unable to recover it. 00:26:28.351 [2024-07-12 16:02:58.007511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.351 [2024-07-12 16:02:58.007537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.007663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.007689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.007830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.007856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 
00:26:28.352 [2024-07-12 16:02:58.008004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.008030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.008169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.008210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.008352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.008380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.008520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.008546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.008700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.008729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.008892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.008918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.009041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.009067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.009201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.009226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.009359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.009385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.009514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.009549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 
00:26:28.352 [2024-07-12 16:02:58.009690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.009722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.009886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.009914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.352 [2024-07-12 16:02:58.010042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.352 [2024-07-12 16:02:58.010070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.352 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.010198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.010225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.010364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.010392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.010529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.010556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.010681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.010707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.010833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.010859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.010986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.011012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.011153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.011190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 
00:26:28.635 [2024-07-12 16:02:58.011364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.011399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.011555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.011587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.011736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.011773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.011904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.011931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.012088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.012115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.012254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.012280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.012420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.012447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.012578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.012605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.012732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.012759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.012899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.012925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 
00:26:28.635 [2024-07-12 16:02:58.013084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.013109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.013234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.635 [2024-07-12 16:02:58.013260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.635 qpair failed and we were unable to recover it. 00:26:28.635 [2024-07-12 16:02:58.013399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.013425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.013562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.013589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.013721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.013747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.013885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.013910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.014044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.014082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.014217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.014246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.014403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.014430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.014562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.014589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 
00:26:28.636 [2024-07-12 16:02:58.014719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.014744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.014896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.014926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.015054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.015080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.015205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.015231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.015377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.015403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.015527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.015552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.015710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.015735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.015905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.015930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.016050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.016075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.016197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.016223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 
00:26:28.636 [2024-07-12 16:02:58.016351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.016376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.016515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.016541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.016682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.016708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.016834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.016859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.016993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.017018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.017153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.017178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.017302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.017335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.017477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.017502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.017632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.017660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.017817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.017842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 
00:26:28.636 [2024-07-12 16:02:58.017974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.018000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.018138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.018164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.018294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.018327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.018464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.018489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.018630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.018656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.018817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.018850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.018985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.019010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.019168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.019194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.019332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.019359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.019490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.019515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 
00:26:28.636 [2024-07-12 16:02:58.019644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.019670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.636 [2024-07-12 16:02:58.019805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.636 [2024-07-12 16:02:58.019830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.636 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.019967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.019992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.020137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.020163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.020302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.020333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.020490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.020515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.020644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.020671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.020795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.020820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.020952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.020978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.021104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.021129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 
00:26:28.637 [2024-07-12 16:02:58.021267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.021293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.021465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.021509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.021653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.021692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.021838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.021866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.022003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.022029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.022157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.022183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.022323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.022350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.022486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.022512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.022635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.022661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.022789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.022814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 
00:26:28.637 [2024-07-12 16:02:58.022933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.022959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.023111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.023136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.023270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.023296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.023438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.023464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.023590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.023615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.023750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.023775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.023927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.023953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.024091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.024121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.024263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.024289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.024425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.024452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 
00:26:28.637 [2024-07-12 16:02:58.024580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.024606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.024735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.024762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.024916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.024942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.025065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.025092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.025256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.025281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.025440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.025465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.025589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.025615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.025792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.025818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.025953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.025980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.026119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.026145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 
00:26:28.637 [2024-07-12 16:02:58.026275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.026300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.026432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.637 [2024-07-12 16:02:58.026458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.637 qpair failed and we were unable to recover it. 00:26:28.637 [2024-07-12 16:02:58.026590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.026616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.026740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.026767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.026922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.026948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.027094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.027120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.027235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.027261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.027395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.027421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.027563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.027592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.027719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.027747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 
00:26:28.638 [2024-07-12 16:02:58.027894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.027921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.028049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.028081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.028236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.028263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.028395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.028422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.028549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.028574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.028693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.028718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.028845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.028871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.029001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.029027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.029157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.029183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.029311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.029343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 
00:26:28.638 [2024-07-12 16:02:58.029466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.029492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.029621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.029646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.029771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.029798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.029945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.029971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.030114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.030140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.030287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.030333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.030468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.030496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.030629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.030655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.030776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.030802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.030930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.030955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 
00:26:28.638 [2024-07-12 16:02:58.031073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.031099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.638 qpair failed and we were unable to recover it. 00:26:28.638 [2024-07-12 16:02:58.031221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.638 [2024-07-12 16:02:58.031247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.031396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.031436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.031566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.031594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.031734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.031760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.031890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.031917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.032046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.032072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.032219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.032245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.032381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.032409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.032564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.032591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 
00:26:28.639 [2024-07-12 16:02:58.032711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.032736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.032885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.032910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.033033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.033059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.033219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.033245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.033387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.033416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.033573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.033600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.033740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.033767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.033903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.033929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.034052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.034079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.034210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.034236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 
00:26:28.639 [2024-07-12 16:02:58.034385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.034413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.034541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.034571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.034716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.034741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.034874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.034900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.035029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.035055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.035179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.035205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.035335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.035362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.639 [2024-07-12 16:02:58.035484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.639 [2024-07-12 16:02:58.035509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.639 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.035633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.035659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.035776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.035802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 
00:26:28.640 [2024-07-12 16:02:58.035945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.035969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.036104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.036130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.036250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.036276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.036411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.036438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.036586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.036612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.036748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.036774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.036897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.036923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.037061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.037087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.037209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.037235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.037383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.037410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 
00:26:28.640 [2024-07-12 16:02:58.037547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.037572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.037697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.037723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.037862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.037889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.038008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.038034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.038194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.038220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.038355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.038381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.038509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.038535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.640 [2024-07-12 16:02:58.038658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.640 [2024-07-12 16:02:58.038684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.640 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.038823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.038862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.039000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.039027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 
00:26:28.641 [2024-07-12 16:02:58.039148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.039173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.039299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.039334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.039456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.039482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.039608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.039633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.039766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.039791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.039921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.039948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.040079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.040105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.040241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.040269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.040408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.040434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.040557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.040583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 
00:26:28.641 [2024-07-12 16:02:58.040715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.040740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.040873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.040898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.041020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.041045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.041179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.041205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.041332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.041359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.041483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.041509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.041634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.041661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.041792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.041818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.041950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.041977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 00:26:28.641 [2024-07-12 16:02:58.042111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.641 [2024-07-12 16:02:58.042138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.641 qpair failed and we were unable to recover it. 
00:26:28.641 [2024-07-12 16:02:58.042263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:28.641 [2024-07-12 16:02:58.042289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 
00:26:28.641 qpair failed and we were unable to recover it. 
[... the same three-line connect() failure sequence repeats continuously through 2024-07-12 16:02:58.076, cycling among tqpair=0x7efe14000b90, 0x7efe04000b90, and 0x1c5b3f0, always against addr=10.0.0.2, port=4420 ...]
00:26:28.649 [2024-07-12 16:02:58.076442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:28.649 [2024-07-12 16:02:58.076488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 
00:26:28.649 qpair failed and we were unable to recover it. 
00:26:28.649 [2024-07-12 16:02:58.076642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.076670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.076813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.076839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.077000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.077026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.077150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.077176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.077308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.077342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.077493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.077519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.077643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.077669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.077791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.077816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.077937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.077963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.078086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.078118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 
00:26:28.649 [2024-07-12 16:02:58.078249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.078275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.078421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.078449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.078592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.078618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.078769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.078794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.078924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.078950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.079070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.079095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.079226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.079251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.079391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.079418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.079550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.079576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.079698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.079724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 
00:26:28.649 [2024-07-12 16:02:58.079886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.079915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.080062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.080087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.080218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.080244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.080375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.080401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.080538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.080563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.080688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.080713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.080832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.649 [2024-07-12 16:02:58.080858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.649 qpair failed and we were unable to recover it. 00:26:28.649 [2024-07-12 16:02:58.080997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.081023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.081153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.081181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.081347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.081374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 
00:26:28.650 [2024-07-12 16:02:58.081499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.081525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.081658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.081685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.081809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.081834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.081982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.082008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.082152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.082187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.082353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.082380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.082509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.082536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.082675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.082701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.082839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.082871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.083016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.083043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 
00:26:28.650 [2024-07-12 16:02:58.083181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.083207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.083334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.083369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.083505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.083531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.083670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.083696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.083830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.083856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.083982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.084007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.084133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.084158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.084323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.084350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.084472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.084497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.084621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.084650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 
00:26:28.650 [2024-07-12 16:02:58.084779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.084806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.084955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.084981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.085108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.085134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.085276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.085322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.085456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.085484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.085639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.085665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.085815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.085841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.085971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.085996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.086119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.086144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.086296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.086327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 
00:26:28.650 [2024-07-12 16:02:58.086481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.086507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.086626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.086652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.086808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.086834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.086966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.086992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.087158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.087184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.087353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.087392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.087530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.087557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.650 [2024-07-12 16:02:58.087689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.650 [2024-07-12 16:02:58.087715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.650 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.087843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.087869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.088024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.088050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 
00:26:28.651 [2024-07-12 16:02:58.088180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.088206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.088330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.088356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.088506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.088532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.088663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.088689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.088817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.088843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.088959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.088984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.089126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.089152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.089283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.089309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.089449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.089476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.089629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.089655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 
00:26:28.651 [2024-07-12 16:02:58.089782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.089807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.089924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.089950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.090105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.090131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.090251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.090277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.090414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.090440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.090567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.090593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.090719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.090746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.090876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.090902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.091038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.091065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.091225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.091256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 
00:26:28.651 [2024-07-12 16:02:58.091380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.091407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.091544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.091570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.091741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.091767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.091921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.091947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.092084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.092110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.092246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.092272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.092446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.092472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.092609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.092635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.092771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.092796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.092933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.092959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 
00:26:28.651 [2024-07-12 16:02:58.093091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.093117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.093241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.651 [2024-07-12 16:02:58.093267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.651 qpair failed and we were unable to recover it. 00:26:28.651 [2024-07-12 16:02:58.093403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.093430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.093598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.093623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.093759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.093786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.093921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.093947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.094103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.094128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.094250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.094276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.094406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.094434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.094558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.094584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 
00:26:28.652 [2024-07-12 16:02:58.094725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.094751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.094878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.094904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.095029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.095055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.095219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.095245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.095376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.095403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.095532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.095557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.095702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.095729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.095862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.095888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.096019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.096045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.096171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.096198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 
00:26:28.652 [2024-07-12 16:02:58.096326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.096352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.096479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.096505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.096640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.096665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.096816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.096841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.096981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.097006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.097157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.097182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.097329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.097356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.097510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.097536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.097695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.097721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.097846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.097875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 
00:26:28.652 [2024-07-12 16:02:58.098002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.098028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.098191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.098230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.098376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.098404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.098531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.652 [2024-07-12 16:02:58.098557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.652 qpair failed and we were unable to recover it. 00:26:28.652 [2024-07-12 16:02:58.098689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.098715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.098844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.098870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.098991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.099016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.099149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.099175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.099302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.099344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.099473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.099500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 
00:26:28.653 [2024-07-12 16:02:58.099634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.099660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.099820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.099846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.099973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.099999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.100128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.100153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.100298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.100333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.100470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.100496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.100660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.100685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.100838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.100863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.100999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.101025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.101152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.101178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 
00:26:28.653 [2024-07-12 16:02:58.101310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.101345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.101469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.101495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.101660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.101685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.101813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.101839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.101992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.102019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.102150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.102176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.102302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.102339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.102474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.102502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.102659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.102684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 00:26:28.653 [2024-07-12 16:02:58.102818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.653 [2024-07-12 16:02:58.102845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.653 qpair failed and we were unable to recover it. 
00:26:28.653 [2024-07-12 16:02:58.102968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.102994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.103170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.103195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.103330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.103362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.103489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.103515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.103680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.103706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.103866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.103892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.104050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.104076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.104209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.104234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.104414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.104439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.104567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.104598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 
00:26:28.654 [2024-07-12 16:02:58.104726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.104751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.104889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.104915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.105068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.105094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.105226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.105251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.105392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.105418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.105577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.105602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.105735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.105760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.105887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.105913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.106042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.106067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.106233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.106259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 
00:26:28.654 [2024-07-12 16:02:58.106380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.106407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.106537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.106562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.106699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.106724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.106860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.106886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.107027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.107052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.107189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.107215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.654 qpair failed and we were unable to recover it. 00:26:28.654 [2024-07-12 16:02:58.107351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.654 [2024-07-12 16:02:58.107379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.107541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.107578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.107719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.107744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.107899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.107926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 
00:26:28.655 [2024-07-12 16:02:58.108070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.108096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.108228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.108254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.108387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.108415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.108548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.108584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.108703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.108729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.108883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.108908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.109042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.109068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.109196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.109222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.109355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.109387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.109519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.109546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 
00:26:28.655 [2024-07-12 16:02:58.109678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.109703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.109827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.109852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.109979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.110006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.110142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.110167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.110297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.110332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.110472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.110499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.110633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.110658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.110782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.110808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.110950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.110975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.111100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.111130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 
00:26:28.655 [2024-07-12 16:02:58.111266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.111293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.111457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.111483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.111639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.655 [2024-07-12 16:02:58.111665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.655 qpair failed and we were unable to recover it. 00:26:28.655 [2024-07-12 16:02:58.111820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.111847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.112026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.112052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.112175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.112200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.112332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.112360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.112489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.112514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.112676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.112701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.112834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.112860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 
00:26:28.656 [2024-07-12 16:02:58.112995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.113020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.113147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.113173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.113330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.113358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.113494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.113520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.113648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.113674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.113797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.113823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.113981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.114008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.114137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.114162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.114325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.114352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.114483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.114508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 
00:26:28.656 [2024-07-12 16:02:58.114662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.114687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.114811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.114837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.114970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.114995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.115127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.115153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.115312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.115344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.115496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.115521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.115713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.115752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.115894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.115921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.116042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.656 [2024-07-12 16:02:58.116068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.656 qpair failed and we were unable to recover it. 00:26:28.656 [2024-07-12 16:02:58.116193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.116218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 
00:26:28.657 [2024-07-12 16:02:58.116376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.116403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.116534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.116562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.116693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.116719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.116836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.116861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.116997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.117022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.117150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.117177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.117326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.117353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.117506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.117533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.117653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.117679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.117799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.117830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 
00:26:28.657 [2024-07-12 16:02:58.117989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.118015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.118172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.118197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.118353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.118379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.118510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.118536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.118664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.118690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.118841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.118867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.119000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.119026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.119174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.119199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.119337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.119362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.119539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.119564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 
00:26:28.657 [2024-07-12 16:02:58.119691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.119717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.119850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.119875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.120002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.120027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.120153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.120179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.120324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.657 [2024-07-12 16:02:58.120351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.657 qpair failed and we were unable to recover it. 00:26:28.657 [2024-07-12 16:02:58.120480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.120506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.120629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.120655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.120787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.120813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.120971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.120996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.121136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.121161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 
00:26:28.658 [2024-07-12 16:02:58.121296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.121330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.121484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.121509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.121656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.121681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.121807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.121833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.121998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.122024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.122180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.122207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.122363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.122390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.122518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.122543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.122670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.122697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.122829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.122854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 
00:26:28.658 [2024-07-12 16:02:58.123008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.123034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.123158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.123185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.123347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.123374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.123527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.123553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.123677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.123702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.658 qpair failed and we were unable to recover it. 00:26:28.658 [2024-07-12 16:02:58.123829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.658 [2024-07-12 16:02:58.123855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.123980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.124005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.124137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.124162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.124320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.124347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.124473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.124503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 
00:26:28.659 [2024-07-12 16:02:58.124638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.124664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.124789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.124816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.124946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.124972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.125099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.125125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.125250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.125276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.125414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.125440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.125575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.125600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.125725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.125751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.125904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.125930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.126053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.126078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 
00:26:28.659 [2024-07-12 16:02:58.126231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.126257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.126384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.126411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.126543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.126568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.126711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.126737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.126859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.126885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.127021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.127047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.127183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.127209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.127344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.127371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.127500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.127525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.127651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.127676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 
00:26:28.659 [2024-07-12 16:02:58.127799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.127825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.127957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.127983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.659 [2024-07-12 16:02:58.128107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.659 [2024-07-12 16:02:58.128133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.659 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.128248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.128273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.128415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.128441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.128589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.128614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.128747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.128772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.128897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.128922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.129059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.129084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.129206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.129231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 
00:26:28.660 [2024-07-12 16:02:58.129352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.129378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.129523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.129548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.129706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.129731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.129897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.129922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.130042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.130068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.130189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.130214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.130355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.130381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.130510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.130535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.130690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.130716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.130863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.130893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 
00:26:28.660 [2024-07-12 16:02:58.131012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.131038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.131169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.131196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.131324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.131352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.131486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.131512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.131666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.131691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.131823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.131848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.131978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.132004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.132123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.132148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.132318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.660 [2024-07-12 16:02:58.132344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.660 qpair failed and we were unable to recover it. 00:26:28.660 [2024-07-12 16:02:58.132492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.132518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 
00:26:28.661 [2024-07-12 16:02:58.132640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.132666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.132793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.132819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.132988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.133015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.133169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.133195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.133364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.133390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.133520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.133545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.133690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.133715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.133845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.133871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.133994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.134020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.134150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.134175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 
00:26:28.661 [2024-07-12 16:02:58.134298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.134331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.134466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.134491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.134617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.134642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.134761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.134786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.134937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.134963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.135084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.135109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.135228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.135254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.135374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.135400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.135532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.135557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.135672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.135697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 
00:26:28.661 [2024-07-12 16:02:58.135825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.135850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.136005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.136030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.136154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.136180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.136320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.136346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.136496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.136522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.136670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.136696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.136815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.136840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.136991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.137016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.137154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.137180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.137303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.137345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 
00:26:28.661 [2024-07-12 16:02:58.137480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.137506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.661 [2024-07-12 16:02:58.137632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.661 [2024-07-12 16:02:58.137658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.661 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.137780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.137806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.137941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.137967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.138100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.138125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.138243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.138269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.138419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.138445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.138582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.138608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.138747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.138772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.138899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.138925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 
00:26:28.662 [2024-07-12 16:02:58.139056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.139081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.139231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.139257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.139378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.139404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.139545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.139571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.139701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.139729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.139880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.139906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.140034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.140059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.140213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.140239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.140362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.140389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.140540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.140566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 
00:26:28.662 [2024-07-12 16:02:58.140745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.140771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.140896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.140922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.141050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.141075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.141198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.141223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.141379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.141405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.141550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.141575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.141697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.141726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.141844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.141870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.141993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.142019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.142145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.142170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 
00:26:28.662 [2024-07-12 16:02:58.142404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.142430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.142554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.142580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.142730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.142755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.142902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.142928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.143083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.143108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.143236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.143261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.143389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.143415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.143540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.143566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.662 [2024-07-12 16:02:58.143705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.662 [2024-07-12 16:02:58.143731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.662 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.143864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.143890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 
00:26:28.663 [2024-07-12 16:02:58.144018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.144044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.144191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.144218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.144345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.144371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.144526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.144551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.144684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.144709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.144846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.144871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.145002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.145027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.145147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.145173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.145297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.145341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.145463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.145489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 
00:26:28.663 [2024-07-12 16:02:58.145629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.145655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.145811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.145836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.145996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.146021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.146186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.146212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.146342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.146368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.146508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.146534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.146660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.146686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.146821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.146848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.146967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.146993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.147139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.147165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 
00:26:28.663 [2024-07-12 16:02:58.147324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.147350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.147473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.147499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.147632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.147657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.147784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.147811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.147965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.147991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.148121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.148147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.148266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.148296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.148438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.148464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.148580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.148605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.148735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.148760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 
00:26:28.663 [2024-07-12 16:02:58.148924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.148949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.149116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.149142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.149276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.663 [2024-07-12 16:02:58.149302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.663 qpair failed and we were unable to recover it. 00:26:28.663 [2024-07-12 16:02:58.149445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.149471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.149626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.149651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.149818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.149844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.149998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.150023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.150151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.150177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.150312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.150344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.150463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.150489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 
00:26:28.664 [2024-07-12 16:02:58.150636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.150662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.150813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.150839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.150993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.151019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.151141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.151167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.151326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.151352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.151499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.151524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.151657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.151682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.151833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.151858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.151984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.152009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.152157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.152182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 
00:26:28.664 [2024-07-12 16:02:58.152342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.152368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.152491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.152517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.152677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.152703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.152861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.152886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.153066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.153092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.153229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.153255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.153379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.153414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.153546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.153571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.153739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.153765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.153886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.153912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 
00:26:28.664 [2024-07-12 16:02:58.154035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.154061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.154219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.154245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.154392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.154419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.154593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.154618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.154740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.154767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.154900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.154926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.155047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.155077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.155200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.155227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.155361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.155387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.155516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.155541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 
00:26:28.664 [2024-07-12 16:02:58.155669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.155694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.155822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.155847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.155973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.664 [2024-07-12 16:02:58.155999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.664 qpair failed and we were unable to recover it. 00:26:28.664 [2024-07-12 16:02:58.156139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.156165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.156347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.156374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.156542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.156568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.156698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.156724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.156845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.156870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.156999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.157024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.157143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.157168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 
00:26:28.665 [2024-07-12 16:02:58.157329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.157355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.157506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.157532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.157683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.157709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.157859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.157885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.158011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.158037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.158173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.158199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.158325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.158351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.158480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.158505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.158636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.158662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.158791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.158816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 
00:26:28.665 [2024-07-12 16:02:58.158969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.158995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.159126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.159152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.159281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.159307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.159439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.159465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.159619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.159644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.159807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.159832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.159956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.159982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.160109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.160134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.160264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.160290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 00:26:28.665 [2024-07-12 16:02:58.160488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.665 [2024-07-12 16:02:58.160514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.665 qpair failed and we were unable to recover it. 
00:26:28.665 [2024-07-12 16:02:58.160639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.160664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.160798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.160825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.160958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.160984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.161144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.161169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.161322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.161348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.161470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.161496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.161663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.161694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.161832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.161858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.162024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.162049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.162174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.162200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 
00:26:28.666 [2024-07-12 16:02:58.162353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.162380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.162549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.162576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.162729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.162755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.162886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.162911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.163037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.163062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.163203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.163229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.163390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.163416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.163551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.163578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.163710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.163735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.163870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.163895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 
00:26:28.666 [2024-07-12 16:02:58.164024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.164051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.164172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.164197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.164360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.164385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.164537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.164563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.164690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.164717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.164846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.666 [2024-07-12 16:02:58.164873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.666 qpair failed and we were unable to recover it. 00:26:28.666 [2024-07-12 16:02:58.164991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.165017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.165141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.165167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.165299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.165330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.165492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.165518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 
00:26:28.667 [2024-07-12 16:02:58.165653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.165678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.165817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.165843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.165997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.166022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.166157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.166182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.166334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.166360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.166510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.166536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.166657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.166683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.166805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.166830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.166982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.167008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.167132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.167158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 
00:26:28.667 [2024-07-12 16:02:58.167291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.167322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.167464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.167489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.167611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.167637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.167780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.167806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.167956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.167982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.168108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.168134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.168262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.168292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.168434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.168460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.168580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.168607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.168761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.168787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 
00:26:28.667 [2024-07-12 16:02:58.168912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.168938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.169062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.169087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.169240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.667 [2024-07-12 16:02:58.169266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.667 qpair failed and we were unable to recover it. 00:26:28.667 [2024-07-12 16:02:58.169393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.169419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.169583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.169609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.169750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.169776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.169902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.169928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.170070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.170095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.170251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.170276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.170408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.170434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 
00:26:28.668 [2024-07-12 16:02:58.170588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.170614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.170742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.170768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.170886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.170913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.171057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.171083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.171201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.171227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.171363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.171389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.171522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.171547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.171679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.171705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.171827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.171853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.171998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.172024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 
00:26:28.668 [2024-07-12 16:02:58.172177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.172202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.172333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.172359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.172494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.172521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.172651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.172676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.172826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.172852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.172972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.172998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.173173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.173199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.173321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.173347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.173471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.668 [2024-07-12 16:02:58.173497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.668 qpair failed and we were unable to recover it. 00:26:28.668 [2024-07-12 16:02:58.173644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.173670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 
00:26:28.669 [2024-07-12 16:02:58.173812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.173838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.173960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.173986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.174104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.174130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.174256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.174282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.174416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.174442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.174583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.174609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.174745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.174776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.174928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.174954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.175096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.175122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.175248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.175273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 
00:26:28.669 [2024-07-12 16:02:58.175411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.175437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.175567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.175593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.175718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.175744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.175874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.175901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.176022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.176049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.176206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.176232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.176364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.176391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.176518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.176550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.176688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.669 [2024-07-12 16:02:58.176715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.669 qpair failed and we were unable to recover it. 00:26:28.669 [2024-07-12 16:02:58.176849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.176875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 
00:26:28.670 [2024-07-12 16:02:58.177038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.177063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.177185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.177211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.177360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.177386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.177515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.177540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.177671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.177696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.177849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.177875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.178024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.178050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.178199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.178226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.178393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.178419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.178553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.178580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 
00:26:28.670 [2024-07-12 16:02:58.178715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.178741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.178870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.178896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.179054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.179079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.179212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.179240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.179379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.179406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.179569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.179595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.179756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.179782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.179908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.179933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.180067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.180093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.180213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.180238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 
00:26:28.670 [2024-07-12 16:02:58.180372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.180398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.180531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.180557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.180708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.180734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.180861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.180886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.670 [2024-07-12 16:02:58.181010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.670 [2024-07-12 16:02:58.181036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.670 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.181158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.181183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.181308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.181342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.181500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.181526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.181644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.181669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.181828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.181853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 
00:26:28.671 [2024-07-12 16:02:58.182004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.182029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.182152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.182177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.182302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.182332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.182462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.182487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.182608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.182635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.182754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.182780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.182948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.182973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.183124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.183150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.183280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.183306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.183442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.183468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 
00:26:28.671 [2024-07-12 16:02:58.183598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.183625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.183770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.183796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.183916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.183942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.184075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.184101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.184227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.184253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.184382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.184409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.184528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.184554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.184706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.184731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.184847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.184873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.184992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.185018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 
00:26:28.671 [2024-07-12 16:02:58.185146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.671 [2024-07-12 16:02:58.185172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.671 qpair failed and we were unable to recover it. 00:26:28.671 [2024-07-12 16:02:58.185302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.185333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.185483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.185509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.185649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.185676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.185826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.185852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.185992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.186018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.186163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.186189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.186350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.186376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.186545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.186571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.186695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.186721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 
00:26:28.672 [2024-07-12 16:02:58.186839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.186865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.186989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.187015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.187148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.187174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.187300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.187334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.187502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.187528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.187661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.187687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.187868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.187897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.188027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.188053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.188176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.188202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.188353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.188380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 
00:26:28.672 [2024-07-12 16:02:58.188509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.188534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.188664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.188689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.188815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.188841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.188978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.189005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.189126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.189151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.189279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.189305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.189444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.189470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.189604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.672 [2024-07-12 16:02:58.189629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.672 qpair failed and we were unable to recover it. 00:26:28.672 [2024-07-12 16:02:58.189781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.189807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.189939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.189965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 
00:26:28.673 [2024-07-12 16:02:58.190099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.190125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.190247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.190272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.190412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.190439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.190594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.190620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.190746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.190773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.190920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.190946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.191073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.191098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.191223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.191250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.191376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.191402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.191560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.191586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 
00:26:28.673 [2024-07-12 16:02:58.191712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.191738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.191868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.191893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.192042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.192067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.192239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.192265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.192400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.192427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.192548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.192574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.192699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.192724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.192853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.192878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.193005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.193031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.193212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.193238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 
00:26:28.673 [2024-07-12 16:02:58.193366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.193391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.193517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.193543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.193672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.193698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.193850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.193875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.194009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.194034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.194180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.194206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.194347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.194380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.194510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.673 [2024-07-12 16:02:58.194535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.673 qpair failed and we were unable to recover it. 00:26:28.673 [2024-07-12 16:02:58.194662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.194687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.194837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.194863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-12 16:02:58.194994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.195020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.195147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.195174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.195298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.195328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.195462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.195488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.195611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.195636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.195800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.195825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.195952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.195978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.196096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.196122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.196276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.196301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.196440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.196467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-12 16:02:58.196636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.196662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.196787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.196813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.196957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.196982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.197116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.197141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.197260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.197286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.197451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.197478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.197604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.197631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.197764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.197790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.197941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.197967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.198124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.198150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-12 16:02:58.198268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.198294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.198433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.198459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.198590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.198616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.198795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.198834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.198974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.199002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.199129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.199154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.199304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.199340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.199476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.199502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.199634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.199659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.199790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.199815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 
00:26:28.674 [2024-07-12 16:02:58.199947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.199974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.200130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.200156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.200290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.200323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.200475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.200501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.200622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.674 [2024-07-12 16:02:58.200647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.674 qpair failed and we were unable to recover it. 00:26:28.674 [2024-07-12 16:02:58.200765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.200790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.200917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.200947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.201084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.201110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.201244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.201269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.201441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.201467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-12 16:02:58.201603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.201629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.201761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.201785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.201907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.201932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.202067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.202093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.202228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.202253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.202375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.202401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.202530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.202556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.202686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.202711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.202846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.202872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.203022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.203047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-12 16:02:58.203188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.203214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.203345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.203371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.203532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.203558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.203689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.203715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.203870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.203896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.204026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.204052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.204178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.204204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.204333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.204358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.204523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.204548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.204700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.204726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 
00:26:28.675 [2024-07-12 16:02:58.204856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.204882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.205041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.205066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.205200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.205226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.205364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.205391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.205519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.205544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.205672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.205698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.205853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.205878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.206006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.206032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.675 [2024-07-12 16:02:58.206189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.675 [2024-07-12 16:02:58.206216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.675 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.206373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.206399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 
00:26:28.676 [2024-07-12 16:02:58.206540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.206565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.206691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.206718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.206854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.206880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.207011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.207036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.207166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.207191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.207327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.207353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.207505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.207534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.207664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.207691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.207820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.207846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.207981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.208006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 
00:26:28.676 [2024-07-12 16:02:58.208148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.208174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.208303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.208338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.208472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.208498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.208622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.208648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.208772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.208797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.208932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.208958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.209098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.209137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.209270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.209297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.209439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.209465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.209588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.209615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 
00:26:28.676 [2024-07-12 16:02:58.209753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.209779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.209904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.209930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.210051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.210077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.210212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.210238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.210369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.210397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.210531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.210557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.210730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.210756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.210888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.210914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.211030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.211056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.211178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.211204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 
00:26:28.676 [2024-07-12 16:02:58.211365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.211392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.211529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.211558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.211695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.211721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.211858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.211885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.212036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.212062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.212209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.212233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.212367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.212394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.212522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.212549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.676 [2024-07-12 16:02:58.212725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.676 [2024-07-12 16:02:58.212751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.676 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.212902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.212928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-12 16:02:58.213104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.213130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.213259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.213285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.213440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.213466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.213592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.213617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.213747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.213773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.213902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.213928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.214074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.214104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.214240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.214266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.214393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.214419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.214587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.214613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-12 16:02:58.214748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.214774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.214930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.214955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.215111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.215140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.215267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.215293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.215429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.215454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.215595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.215621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.215748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.215772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.215893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.215918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.216097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.216123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.216260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.216286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-12 16:02:58.216436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.216463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.216592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.216617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.216755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.216780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.216915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.216940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.217065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.217091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.217225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.217251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.217415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.217441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.217575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.217602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.217728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.217754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.217906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.217931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-12 16:02:58.218054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.218079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.218212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.218237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.218362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.218388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.218527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.218555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.218683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.218709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.218837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.218862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.218994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.219020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.219148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.219174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.219299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.219330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.677 [2024-07-12 16:02:58.219481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.219506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 
00:26:28.677 [2024-07-12 16:02:58.219636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.677 [2024-07-12 16:02:58.219662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.677 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.219809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.219834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.219964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.219989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.220109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.220135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.220273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.220299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.220426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.220452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.220582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.220607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.220733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.220759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.220887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.220912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.221045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.221069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-12 16:02:58.221202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.221228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.221377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.221402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.221534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.221560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.221679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.221704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.221829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.221855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.221979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.222005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.222141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.222167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.222309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.222355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.222486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.222513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.222646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.222673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-12 16:02:58.222810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.222836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.222990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.223015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.223142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.223169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.223294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.223328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.223455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.223481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.223631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.223657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.223786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.223812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.223949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.223975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.224105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.224130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.224273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.224298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-12 16:02:58.224442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.224468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.224590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.224616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.224755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.224782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.224941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.224971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.225101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.225128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.225251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.225277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.225426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.225453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.225574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.225599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.225729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.225755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.225911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.225936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 
00:26:28.678 [2024-07-12 16:02:58.226079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.226105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.226258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.226285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.226418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.226445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.226595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.226620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.678 [2024-07-12 16:02:58.226752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.678 [2024-07-12 16:02:58.226778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.678 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.226897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.226922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.227078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.227103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.227243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.227269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.227397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.227423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.227570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.227596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 
00:26:28.679 [2024-07-12 16:02:58.227744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.227769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.227901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.227928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.228062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.228088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.228217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.228243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.228393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.228419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.228574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.228599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.228726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.228751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.228885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.228912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.229039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.229064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.229236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.229263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 
00:26:28.679 [2024-07-12 16:02:58.229413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.229439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.229571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.229596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.229722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.229748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.229868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.229894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.230027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.230052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.230173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.230198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.230346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.230372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.230500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.230526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.230672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.230697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.230818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.230844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 
00:26:28.679 [2024-07-12 16:02:58.230981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.231008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.231168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.231207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.231368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.231397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.231519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.231550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.231683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.231708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.231827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.231852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.231982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.232008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.232162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.232188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.232307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.232337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.232466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.232492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 
00:26:28.679 [2024-07-12 16:02:58.232619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.232646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.232787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.232812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.679 [2024-07-12 16:02:58.232945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.679 [2024-07-12 16:02:58.232971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.679 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.233126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.233152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.233274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.233299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.233439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.233467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.233589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.233613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.233758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.233783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.233920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.233946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.234074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.234100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 
00:26:28.680 [2024-07-12 16:02:58.234219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.234245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.234404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.234431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.234576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.234601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.234733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.234759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.234891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.234916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.235042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.235070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.235198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.235225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.235361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.235388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.235534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.235561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.235681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.235707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 
00:26:28.680 [2024-07-12 16:02:58.235845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.235871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.236009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.236035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.236170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.236195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.236329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.236354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.236493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.236518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.236638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.236662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.236795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.236821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.236977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.237004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.237135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.237161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.237295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.237328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 
00:26:28.680 [2024-07-12 16:02:58.237459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.237487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.237617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.237642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.237783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.237810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.237967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.237999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.238154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.238180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.238308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.238341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.238478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.238505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.238636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.238661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.238805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.238830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.238957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.238983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 
00:26:28.680 [2024-07-12 16:02:58.239135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.239162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.239291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.239323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.239463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.239490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.239614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.239639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.239758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.239782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.239931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.239957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.240096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.240122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.240260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.240286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.680 qpair failed and we were unable to recover it. 00:26:28.680 [2024-07-12 16:02:58.240424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.680 [2024-07-12 16:02:58.240451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.240591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.240617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 
00:26:28.681 [2024-07-12 16:02:58.240737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.240763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.240911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.240937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.241072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.241097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.241260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.241286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.241428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.241466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.241615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.241643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.241772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.241799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.241916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.241942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.242065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.242091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.242218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.242245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 
00:26:28.681 [2024-07-12 16:02:58.242391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.242420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.242554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.242581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.242721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.242746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.242900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.242926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.243058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.243085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.243238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.243264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.243416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.243443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.243571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.243597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.243744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.243769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.243892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.243918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 
00:26:28.681 [2024-07-12 16:02:58.244051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.244077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.244205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.244230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.244385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.244412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.244537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.244568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.244723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.244749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.244905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.244930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.245074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.245101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.245228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.245254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.245403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.245430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.245556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.245582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 
00:26:28.681 [2024-07-12 16:02:58.245715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.245740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.245862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.245888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.246036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.246062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.246188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.246214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.246365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.246392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.246539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.246566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.246688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.246713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.246854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.246880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.247009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.247035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.247162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.247188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 
00:26:28.681 [2024-07-12 16:02:58.247345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.247371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.247499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.247527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.247675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.247702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.247866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.247892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.248026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.248053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.681 [2024-07-12 16:02:58.248186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.681 [2024-07-12 16:02:58.248212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.681 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.248360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.248387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.248526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.248551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.248684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.248711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.248862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.248889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 
00:26:28.682 [2024-07-12 16:02:58.249015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.249040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.249165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.249191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.249344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.249384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.249538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.249565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.249688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.249714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.249850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.249876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.249999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.250026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.250159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.250185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.250327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.250356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.250479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.250505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 
00:26:28.682 [2024-07-12 16:02:58.250636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.250663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.250813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.250840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.250961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.250986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.251132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.251161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.251300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.251334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.251486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.251512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.251633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.251658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.251807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.251833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.251964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.251989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.252115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.252142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 
00:26:28.682 [2024-07-12 16:02:58.252321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.252348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.252480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.252506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.252638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.252665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.252816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.252842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.252976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.253002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.253136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.253162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.253329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.253368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.253512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.253541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.253668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.253695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.253851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.253877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 
00:26:28.682 [2024-07-12 16:02:58.254022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.254048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.254167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.254192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.254322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.254349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.254487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.254513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.254642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.254667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.254796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.254823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.254952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.254977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.255110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.255136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.255275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.255301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.255433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.255460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 
00:26:28.682 [2024-07-12 16:02:58.255589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.255615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.255730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.255755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.255909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.682 [2024-07-12 16:02:58.255935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.682 qpair failed and we were unable to recover it. 00:26:28.682 [2024-07-12 16:02:58.256059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.256084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.256220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.256245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.256373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.256399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.256531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.256557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.256708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.256734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.256868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.256894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.257028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.257054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 
00:26:28.683 [2024-07-12 16:02:58.257191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.257217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.257337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.257363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.257501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.257527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.257656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.257686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.257815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.257841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.257962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.257988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.258122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.258148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.258280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.258305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.258454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.258479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.258605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.258631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 
00:26:28.683 [2024-07-12 16:02:58.258781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.258806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.258931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.258956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.259109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.259135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.259258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.259283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.259412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.259439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.259590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.259616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.259753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.259779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.259936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.259962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.260096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.260121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.260246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.260272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 
00:26:28.683 [2024-07-12 16:02:58.260403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.260429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.260582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.260607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.260728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.260753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.260874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.260899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.261024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.261050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.261182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.261209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.261375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.261401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.261529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.261555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.261911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.261940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.262119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.262145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 
00:26:28.683 [2024-07-12 16:02:58.262290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.262332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.262464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.262489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.262620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.262645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.262765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.262791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.262948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.683 [2024-07-12 16:02:58.262975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.683 qpair failed and we were unable to recover it. 00:26:28.683 [2024-07-12 16:02:58.263104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.263130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.263280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.263305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.263497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.263523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.263661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.263701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.263859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.263898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-12 16:02:58.264040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.264067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.264199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.264225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.264381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.264408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.264540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.264571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.264733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.264759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.264898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.264926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.265058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.265084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.265218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.265244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.265386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.265412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.265570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.265596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-12 16:02:58.265757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.265783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.265939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.265964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.266087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.266112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.266238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.266264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.266420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.266447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.266571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.266597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.266725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.266751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.266904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.266930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.267058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.267084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.267215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.267241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-12 16:02:58.267363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.267391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.267515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.267541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.267674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.267700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.267842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.267867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.267982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.268008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.268148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.268174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.268326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.268366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.268506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.268536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.268697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.268724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.268864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.268890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-12 16:02:58.269023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.269049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.269182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.269207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.269347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.269375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.269501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.269527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.269651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.269677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.269805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.269831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.269963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.269990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.270125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.270152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.270269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.270295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.270431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.270457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 
00:26:28.684 [2024-07-12 16:02:58.270591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.270626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.270746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.270772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.684 qpair failed and we were unable to recover it. 00:26:28.684 [2024-07-12 16:02:58.270918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.684 [2024-07-12 16:02:58.270944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.271065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.271096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.271220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.271246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.271378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.271405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.271536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.271563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.271723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.271750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.271878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.271904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.272031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.272057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-12 16:02:58.272193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.272218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.272348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.272376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.272516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.272542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.272674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.272700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.272826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.272852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.272987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.273013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.273139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.273165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.273323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.273350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.273504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.273530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.273656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.273683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-12 16:02:58.273810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.273835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.273962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.273988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.274155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.274195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.274333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.274361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.274489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.274516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.274640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.274666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.274800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.274825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.274981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.275007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.275134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.275160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.275276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.275301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-12 16:02:58.275461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.275487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.275609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.275634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.275776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.275802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.275924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.275949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.276082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.276107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.276256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.276282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.276426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.276452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.276589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.276620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.276770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.276795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.276947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.276973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-12 16:02:58.277126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.277151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.277276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.277301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.277469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.277495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.277641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.277670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.277818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.277843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.277966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.277992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.278119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.278145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.278267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.278293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.278429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.278455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.278591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.278619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 
00:26:28.685 [2024-07-12 16:02:58.278750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.278776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.278894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.278920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.685 [2024-07-12 16:02:58.279045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.685 [2024-07-12 16:02:58.279071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.685 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.279200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.279226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.279355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.279381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.279504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.279532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.279666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.279692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.279833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.279858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.279981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.280008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.280132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.280158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-12 16:02:58.280291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.280322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.280447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.280473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.280629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.280655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.280780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.280805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.280958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.280983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.281103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.281128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.281253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.281278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.281419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.281447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.281581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.281607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.281738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.281764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-12 16:02:58.281902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.281931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.282060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.282087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.282223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.282250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.282378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.282404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.282542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.282568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.282690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.282716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.282837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.282863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.282988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.283014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.283139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.283166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.283297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.283329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-12 16:02:58.283462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.283488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.283611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.283636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.283805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.283831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.283958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.283988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.284113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.284140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.284258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.284284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.284419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.284446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.284581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.284606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.284724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.284750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.284888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.284913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-12 16:02:58.285037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.285062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.285185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.285211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.285341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.285368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.285526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.285552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.285676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.285701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.285837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.285863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.285984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.286009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.286169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.286195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.286321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.286348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.686 [2024-07-12 16:02:58.286477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.286504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 
00:26:28.686 [2024-07-12 16:02:58.286628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.686 [2024-07-12 16:02:58.286654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.686 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.286772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.286798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.286941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.286967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.287099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.287124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.287256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.287282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.287418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.287445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.287613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.287641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.287789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.287815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.287938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.287964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.288081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.288107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 
00:26:28.687 [2024-07-12 16:02:58.288234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.288260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.288388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.288414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.288565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.288591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.288733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.288758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.288882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.288908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.289059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.289084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.289245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.289271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.289403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.289430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.289561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.289586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.289733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.289759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 
00:26:28.687 [2024-07-12 16:02:58.289882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.289909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.290034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.290059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.290194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.290220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.290349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.290382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.290499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.290524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.290673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.290698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.290820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.290845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.290981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.291006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.291136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.291162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.291289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.291320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 
00:26:28.687 [2024-07-12 16:02:58.291449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.291475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.291600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.291626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.291756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.291781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.687 [2024-07-12 16:02:58.291906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.687 [2024-07-12 16:02:58.291932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.687 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.292058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.292084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.292219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.292245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.292376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.292402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.292543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.292570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.292699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.292725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.292848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.292874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 
00:26:28.688 [2024-07-12 16:02:58.293026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.293052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.293190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.293216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.293347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.293374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.293496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.293522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.293643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.293668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.293809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.293835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.293969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.293994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.294119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.294145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.294301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.294333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.294459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.294485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 
00:26:28.688 [2024-07-12 16:02:58.294627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.294665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.294797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.294824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.294948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.294978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.295116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.295141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.295295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.295343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.295490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.295517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.295647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.295672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.295801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.295827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.295955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.295981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.296110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.296138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 
00:26:28.688 [2024-07-12 16:02:58.296265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.296293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.296431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.296458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.296584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.296609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.296741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.296767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.296895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.296923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.297078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.297103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.297243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.297270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.297411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.297437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.297587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.297614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.297739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.297765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 
00:26:28.688 [2024-07-12 16:02:58.297893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.297920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.298051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.298077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.298203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.298229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.298360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.688 [2024-07-12 16:02:58.298386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.688 qpair failed and we were unable to recover it. 00:26:28.688 [2024-07-12 16:02:58.298512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.298537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.298670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.298696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.298824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.298849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.298995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.299021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.299156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.299182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.299307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.299342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 
00:26:28.689 [2024-07-12 16:02:58.299469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.299494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.299625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.299652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.299789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.299815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.299949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.299975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.300099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.300126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.300256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.300282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.300429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.300457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.300578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.300605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.300735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.300761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.300903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.300929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 
00:26:28.689 [2024-07-12 16:02:58.301060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.301091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.301216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.301243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.301399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.301426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.301569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.301595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.301720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.301745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.301898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.301923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.302054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.302080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.302208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.302233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.302419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.302457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.302599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.302628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 
00:26:28.689 [2024-07-12 16:02:58.302768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.302796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.302923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.302950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.303079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.303105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.303232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.303258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.303387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.303414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.303560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.303585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.303710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.303735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.303869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.303897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.304022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.304048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.304173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.304199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 
00:26:28.689 [2024-07-12 16:02:58.304330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.304360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.304490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.304516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.304637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.304663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.304792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.689 [2024-07-12 16:02:58.304819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.689 qpair failed and we were unable to recover it. 00:26:28.689 [2024-07-12 16:02:58.304947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.304972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.305101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.305127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.305251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.305277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.305429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.305468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.305603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.305630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.305756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.305782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-12 16:02:58.305931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.305956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.306094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.306121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.306251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.306277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.306411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.306438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.306567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.306593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.306740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.306765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.306911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.306936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.307064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.307090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.307209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.307235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.307359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.307385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-12 16:02:58.307511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.307541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.307664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.307690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.307841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.307867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.308006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.308032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.308159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.308188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.308330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.308369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.308525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.308552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.308700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.308726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.308878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.308903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.309028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.309054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-12 16:02:58.309174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.309199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.309367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.309407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.309548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.309577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.309718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.309745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.309870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.309897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.310059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.310085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.310211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.310237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.310400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.310427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.310561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.310587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.310724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.310749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 
00:26:28.690 [2024-07-12 16:02:58.310872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.310897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.311021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.311047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.311167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.311192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.311343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.311371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.690 qpair failed and we were unable to recover it. 00:26:28.690 [2024-07-12 16:02:58.311500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.690 [2024-07-12 16:02:58.311525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.311648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.311674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.311804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.311829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.311975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.312005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.312135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.312161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.312282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.312307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 
00:26:28.691 [2024-07-12 16:02:58.312476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.312501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.312623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.312648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.312802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.312827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.312957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.312984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.313165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.313190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.313319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.313344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.313464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.313490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.313617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.313642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.313758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.313783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.313917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.313943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 
00:26:28.691 [2024-07-12 16:02:58.314077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.314116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.314266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.314305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.314439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.314467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.314601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.314627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.314786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.314812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.314939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.314965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.315119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.315145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.315269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.315295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.315438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.315468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.315603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.315629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 
00:26:28.691 [2024-07-12 16:02:58.315760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.315786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.315914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.315940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.316075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.316102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.316240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.316266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.316408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.316439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.316575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.316602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.316761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.316787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.316911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.316936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.691 [2024-07-12 16:02:58.317067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.691 [2024-07-12 16:02:58.317093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.691 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.317239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.317264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-12 16:02:58.317395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.317422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.317550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.317576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.317700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.317725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.317855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.317880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.318018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.318047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.318171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.318197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.318333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.318360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.318512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.318538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.318674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.318701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.318823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.318849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-12 16:02:58.319002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.319029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.319178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.319205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.319325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.319352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.319500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.319525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.319661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.319686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.319805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.319831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.319958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.319983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.320102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.320127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.320252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.320278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.320412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.320438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-12 16:02:58.320588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.320613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.320748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.320777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.320900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.320925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.321078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.321103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.321233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.321258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.321393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.321418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.321557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.321582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.321705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.321730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.321861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.321886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.322039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.322064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 
00:26:28.692 [2024-07-12 16:02:58.322215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.322240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.322366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.322392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.322532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.322571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.322715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.322742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.322875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.322901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.323028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.323054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.323176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.323203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.323355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.323381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.323520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.692 [2024-07-12 16:02:58.323545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.692 qpair failed and we were unable to recover it. 00:26:28.692 [2024-07-12 16:02:58.323671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.323696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 
00:26:28.693 [2024-07-12 16:02:58.323843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.323869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.324022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.324048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.324198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.324224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.324365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.324405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.324539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.324566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.324715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.324741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.324868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.324895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.325049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.325075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.325213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.325244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.325385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.325411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 
00:26:28.693 [2024-07-12 16:02:58.325537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.325564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.325697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.325723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.325853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.325879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.326007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.326034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.326185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.326211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.326374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.326413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.326547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.326574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.326703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.326729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.326852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.326877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.327027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.327052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 
00:26:28.693 [2024-07-12 16:02:58.327185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.327210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.327333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.327359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.327518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.327543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.327694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.327720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.327848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.327873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.327996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.328021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.328179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.328205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.328332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.328357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.328481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.328506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.328656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.328681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 
00:26:28.693 [2024-07-12 16:02:58.328813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.328838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.328991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.329017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.329150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.329175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.329390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.329417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.329550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.329575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.329724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.329754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.329901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.329926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.330057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.330082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.330232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.330257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 00:26:28.693 [2024-07-12 16:02:58.330405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.330432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.693 qpair failed and we were unable to recover it. 
00:26:28.693 [2024-07-12 16:02:58.330571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.693 [2024-07-12 16:02:58.330597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.330749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.330775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.330900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.330925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.331099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.331124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.331250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.331275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.331462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.331502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.331644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.331682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.331816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.331844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.332003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.332030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.332163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.332189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 
00:26:28.694 [2024-07-12 16:02:58.332321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.332348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.332480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.332507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.332658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.332684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.332813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.332840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.332969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.332997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.333172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.333198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.333322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.333348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.333476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.333502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.333636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.333662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.333789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.333814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 
00:26:28.694 [2024-07-12 16:02:58.333962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.333988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.334122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.334147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.334302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.334339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.334463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.334488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.334633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.334658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.334783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.334809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.334963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.334989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.335122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.335148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.335274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.335299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.335449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.335474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 
00:26:28.694 [2024-07-12 16:02:58.335597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.335622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.335743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.335768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.335892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.335917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.336068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.336093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.336224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.336249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.336381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.336407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.336564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.336589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.336737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.336763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.336934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.336960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.337137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.337162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 
00:26:28.694 [2024-07-12 16:02:58.337300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.337331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.694 qpair failed and we were unable to recover it. 00:26:28.694 [2024-07-12 16:02:58.337457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.694 [2024-07-12 16:02:58.337483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.695 qpair failed and we were unable to recover it. 00:26:28.695 [2024-07-12 16:02:58.337619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.695 [2024-07-12 16:02:58.337645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.695 qpair failed and we were unable to recover it. 00:26:28.695 [2024-07-12 16:02:58.337766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.695 [2024-07-12 16:02:58.337791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.695 qpair failed and we were unable to recover it. 00:26:28.695 [2024-07-12 16:02:58.337947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.337972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.338100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.338126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.338255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.338281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.338426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.338465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.338620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.338648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.338774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.338806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 
00:26:28.972 [2024-07-12 16:02:58.338942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.338967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.339093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.339119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.339253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.339280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.339424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.339450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.339597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.339623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.339755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.339781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.339931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.339956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.340109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.340134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.340257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.340284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.340415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.340443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 
00:26:28.972 [2024-07-12 16:02:58.340580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.340608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.340735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.340762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.340888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.340914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.341057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.341083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.341214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.341241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.341391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.972 [2024-07-12 16:02:58.341417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.972 qpair failed and we were unable to recover it. 00:26:28.972 [2024-07-12 16:02:58.341546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.341572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.341700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.341727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.341848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.341874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.341992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.342017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 
00:26:28.973 [2024-07-12 16:02:58.342147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.342173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.342332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.342359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.342483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.342509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.342630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.342656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.342804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.342829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.342964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.342990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.343142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.343167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.343293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.343325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.343450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.343476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.343596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.343622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 
00:26:28.973 [2024-07-12 16:02:58.343753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.343780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.343905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.343932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.344062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.344088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.344213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.344239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.344374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.344414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.344577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.344605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.344751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.344778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.344917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.344943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.345102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.345128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.345261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.345292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 
00:26:28.973 [2024-07-12 16:02:58.345424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.345452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.345577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.345603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.345726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.345753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.345884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.345909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.346034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.346062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.346204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.346230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.346358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.346385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.346518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.346543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.346670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.346696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.346823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.346849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 
00:26:28.973 [2024-07-12 16:02:58.346976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.347003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.347155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.347181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.347311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.347343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.347484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.347510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.347658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.347684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.347833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.347858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.973 qpair failed and we were unable to recover it. 00:26:28.973 [2024-07-12 16:02:58.347999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.973 [2024-07-12 16:02:58.348024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.348172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.348198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.348348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.348374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.348519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.348545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 
00:26:28.974 [2024-07-12 16:02:58.348667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.348693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.348822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.348848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.348967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.348993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.349117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.349142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.349293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.349323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.349453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.349478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.349615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.349653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.349791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.349818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.349955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.349981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.350135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.350160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 
00:26:28.974 [2024-07-12 16:02:58.350310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.350341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.350475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.350500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.350619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.350644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.350775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.350800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.350927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.350952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.351075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.351101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.351238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.351263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.351409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.351435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.351565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.351590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.351717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.351743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 
00:26:28.974 [2024-07-12 16:02:58.351913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.351939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.352068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.352093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.352244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.352269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.352396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.352423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.352572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.352598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.352732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.352757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.352892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.352917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.353047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.353072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.353221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.353246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.353380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.353406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 
00:26:28.974 [2024-07-12 16:02:58.353532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.353557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.353701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.353726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.353879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.353904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.354054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.354084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.354261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.354287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.354457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.354484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.354611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.974 [2024-07-12 16:02:58.354636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.974 qpair failed and we were unable to recover it. 00:26:28.974 [2024-07-12 16:02:58.354762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.354787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.354917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.354942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.355115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.355140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 
00:26:28.975 [2024-07-12 16:02:58.355272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.355297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.355422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.355446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.355589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.355614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.355764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.355791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.355921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.355946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.356093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.356118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.356252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.356292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.356440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.356468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.356634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.356673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.356808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.356836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 
00:26:28.975 [2024-07-12 16:02:58.356969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.356995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.357141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.357166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.357298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.357330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.357464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.357490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.357622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.357649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.357808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.357835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.357958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.357984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.358125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.358151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.358272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.358298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.358444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.358470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 
00:26:28.975 [2024-07-12 16:02:58.358598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.358630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.358753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.358780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.358901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.358927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.359053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.359079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.359204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.359229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.359388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.359414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.359541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.359567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.359713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.359739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.359864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.359889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.360007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.360032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 
00:26:28.975 [2024-07-12 16:02:58.360153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.360178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.360302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.360342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.360474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.360499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.360621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.360647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.360787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.360813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.360959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.360985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.361119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.361145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.361274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.975 [2024-07-12 16:02:58.361301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.975 qpair failed and we were unable to recover it. 00:26:28.975 [2024-07-12 16:02:58.361445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.361473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.361626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.361652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 
00:26:28.976 [2024-07-12 16:02:58.361774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.361800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.361925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.361952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.362079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.362107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.362244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.362271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.362409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.362437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.362563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.362589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.362717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.362743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.362883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.362909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.363040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.363066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.363192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.363219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 
00:26:28.976 [2024-07-12 16:02:58.363350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.363376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.363513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.363538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.363664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.363690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.363810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.363836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.363962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.363988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.364125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.364152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.364305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.364337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.364468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.364494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.364632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.364657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.364778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.364804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 
00:26:28.976 [2024-07-12 16:02:58.364937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.364969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.365121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.365148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.365296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.365329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.365465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.365491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.365616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.365642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.365771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.365797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.365926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.365952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.366076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.366102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.366236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.366262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.366423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.366451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 
00:26:28.976 [2024-07-12 16:02:58.366592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.366629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.976 [2024-07-12 16:02:58.366769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.976 [2024-07-12 16:02:58.366795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.976 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.366921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.366948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.367078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.367104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.367233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.367259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.367399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.367426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.367561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.367587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.367708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.367733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.367879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.367904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.368025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.368051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 
00:26:28.977 [2024-07-12 16:02:58.368203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.368230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.368370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.368409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.368550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.368577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.368705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.368732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.368858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.368883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.369003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.369028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.369180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.369220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.369354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.369381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.369505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.369532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.369654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.369680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 
00:26:28.977 [2024-07-12 16:02:58.369810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.369836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.369982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.370008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.370138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.370164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.370292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.370334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.370499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.370525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.370662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.370687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.370841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.370868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.370991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.371019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.371152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.371178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.371324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.371364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 
00:26:28.977 [2024-07-12 16:02:58.371524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.371557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.371696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.371722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.371852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.371878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.372008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.372033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.372183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.372209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.372344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.372371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.372500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.372526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.372684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.372709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.372839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.372866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.372994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.373020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 
00:26:28.977 [2024-07-12 16:02:58.373154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.373180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.373339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.977 [2024-07-12 16:02:58.373366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.977 qpair failed and we were unable to recover it. 00:26:28.977 [2024-07-12 16:02:58.373518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.373544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.373676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.373703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.373835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.373861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.373986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.374013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.374155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.374181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.374306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.374337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.374465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.374491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.374626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.374652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 
00:26:28.978 [2024-07-12 16:02:58.374834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.374860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.374986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.375012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.375155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.375182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.375331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.375358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.375488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.375514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.375636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.375662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.375790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.375816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.375948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.375974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.376094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.376120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.376259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.376285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 
00:26:28.978 [2024-07-12 16:02:58.376419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.376446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.376587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.376636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.376799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.376826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.376974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.376999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.377124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.377149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.377270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.377296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.377460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.377487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.377643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.377668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.377793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.377818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.377946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.377971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 
00:26:28.978 [2024-07-12 16:02:58.378105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.378136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.378304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.378336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.378476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.378503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.378637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.378663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.378788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.378814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.378964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.378990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.379118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.379144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.379298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.379336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.379487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.379513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.379673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.379701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 
00:26:28.978 [2024-07-12 16:02:58.379828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.379855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.380004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.380030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.978 [2024-07-12 16:02:58.380166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.978 [2024-07-12 16:02:58.380194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.978 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.380381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.380424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.380598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.380631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.380757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.380783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.380931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.380957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.381090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.381116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.381257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.381296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.381433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.381460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 
00:26:28.979 [2024-07-12 16:02:58.381593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.381620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.381756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.381782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.381919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.381947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.382080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.382107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.382242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.382270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.382404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.382431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.382559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.382586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.382713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.382744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.382881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.382908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.383033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.383059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 
00:26:28.979 [2024-07-12 16:02:58.383181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.383208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.383371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.383398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.383522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.383548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.383695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.383721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.383842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.383868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.383997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.384023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.384151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.384179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.384347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.384387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.384520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.384548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.384712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.384738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 
00:26:28.979 [2024-07-12 16:02:58.384870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.384896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.385037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.385065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.385220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.385247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.385381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.385420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.385567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.385593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.385723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.385750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.385884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.385910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.386038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.386064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.386192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.386220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.386380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.386420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 
00:26:28.979 [2024-07-12 16:02:58.386587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.386615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.386745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.386772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.386898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.386925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.979 [2024-07-12 16:02:58.387058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.979 [2024-07-12 16:02:58.387084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe0c000b90 with addr=10.0.0.2, port=4420 00:26:28.979 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.387232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.387271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.387419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.387459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.387611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.387638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.387762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.387788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.387919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.387946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.388079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.388105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 
00:26:28.980 [2024-07-12 16:02:58.388268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.388294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.388489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.388527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.388675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.388702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.388828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.388854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.388985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.389011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.389139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.389165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.389290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.389336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.389484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.389509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.389641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.389666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.389792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.389818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 
00:26:28.980 [2024-07-12 16:02:58.389981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.390007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.390136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.390162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.390287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.390313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.390475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.390501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.390630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.390655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.390775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.390800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.390966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.390991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.391126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.391152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.391281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.391306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.391435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.391461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 
00:26:28.980 [2024-07-12 16:02:58.391593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.391628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.391763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.391789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.391915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.391940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.392070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.392096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.392242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.392267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.392410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.392436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.392571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.392598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.392730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.392755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.392879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.392904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.393035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.393060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 
00:26:28.980 [2024-07-12 16:02:58.393182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.393209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.393377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.393404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.393526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.393551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.980 qpair failed and we were unable to recover it. 00:26:28.980 [2024-07-12 16:02:58.393676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.980 [2024-07-12 16:02:58.393701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.393826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.393852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.394010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.394035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.394160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.394185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.394312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.394346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.394471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.394497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.394622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.394648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 
00:26:28.981 [2024-07-12 16:02:58.394796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.394822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.394952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.394978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.395108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.395134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.395266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.395306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.395475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.395502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.395623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.395650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.395767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.395793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.395943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.395968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.396127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.396153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.396310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.396350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 
00:26:28.981 [2024-07-12 16:02:58.396484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.396510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.396668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.396694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.396821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.396847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.396994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.397020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.397184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.397210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.397362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.397402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.397561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.397588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.397725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.397751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.397904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.397929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.398050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.398087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 
00:26:28.981 [2024-07-12 16:02:58.398229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.398255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.398406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.398435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.398579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.398606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.398734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.398760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.398940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.398967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.399093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.399119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.399242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.399269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.399404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.399431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.399574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.399599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 00:26:28.981 [2024-07-12 16:02:58.399760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.981 [2024-07-12 16:02:58.399786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.981 qpair failed and we were unable to recover it. 
00:26:28.981 [2024-07-12 16:02:58.399913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.399939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.400075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.400100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.400245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.400271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.400416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.400454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.400592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.400626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.400759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.400786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.400908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.400934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.401051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.401076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.401215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.401241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.401405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.401432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 
00:26:28.982 [2024-07-12 16:02:58.401550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.401575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.401730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.401755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.401884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.401910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.402080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.402105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.402243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.402269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.402405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.402432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.402553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.402579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.402752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.402777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.402911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.402937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.403087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.403112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 
00:26:28.982 [2024-07-12 16:02:58.403236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.403262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.403398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.403424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.403572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.403598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.403731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.403756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.403874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.403900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.404028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.404054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.404183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.404209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.404340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.404366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.404485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.404510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.404648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.404673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 
00:26:28.982 [2024-07-12 16:02:58.404830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.404858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.405008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.405034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.405189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.405215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.405340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.405367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.405489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.405515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.405666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.405691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.405815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.405841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.405966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.405992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.406111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.406137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.982 [2024-07-12 16:02:58.406261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.406287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 
00:26:28.982 [2024-07-12 16:02:58.406425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.982 [2024-07-12 16:02:58.406451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.982 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.406582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.406608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.406762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.406788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.406905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.406930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.407053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.407078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.407221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.407246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.407388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.407415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.407569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.407596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.407749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.407774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.407893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.407919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 
00:26:28.983 [2024-07-12 16:02:58.408098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.408124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.408273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.408299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.408458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.408484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.408610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.408635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.408797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.408823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.408963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.408988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.409123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.409148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.409268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.409293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.409421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.409447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.409604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.409636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 
00:26:28.983 [2024-07-12 16:02:58.409770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.409795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.409919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.409945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.410061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.410086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.410263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.410288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.410439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.410464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.410587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.410612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.410738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.410764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.410890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.410916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.411043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.411068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.411201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.411241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 
00:26:28.983 [2024-07-12 16:02:58.411419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.411447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.411573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.411599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.411763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.411789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.411945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.411980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.412109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.412135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.412257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.412282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.412423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.412451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.412584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.412610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.412740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.412766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.412891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.412917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 
00:26:28.983 [2024-07-12 16:02:58.413034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.983 [2024-07-12 16:02:58.413059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.983 qpair failed and we were unable to recover it. 00:26:28.983 [2024-07-12 16:02:58.413213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.413252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.413395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.413422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.413544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.413570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.413701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.413727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.413878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.413904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.414031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.414057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.414181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.414206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.414332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.414358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.414513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.414538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-12 16:02:58.414687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.414713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.414841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.414867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.415012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.415038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.415184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.415212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.415344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.415370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.415510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.415536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.415696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.415723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.415843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.415868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.416006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.416031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.416170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.416196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-12 16:02:58.416328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.416360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.416493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.416519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.416679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.416704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.416829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.416855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.416996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.417022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.417176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.417201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.417356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.417385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.417524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.417550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.417681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.417709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.417831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.417857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-12 16:02:58.418021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.418046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.418167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.418195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.418322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.418349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.418479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.418506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.418632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.418658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.418787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.418814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.418949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.418975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.419114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.419140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.419292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.419336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.419471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.419497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 
00:26:28.984 [2024-07-12 16:02:58.419622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.419648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.419779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.984 [2024-07-12 16:02:58.419805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.984 qpair failed and we were unable to recover it. 00:26:28.984 [2024-07-12 16:02:58.419958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.419983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.420132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.420158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.420280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.420307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.420466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.420493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.420673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.420703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.420831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.420858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.420986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.421012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.421150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.421176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 
00:26:28.985 [2024-07-12 16:02:58.421307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.421358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.421492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.421518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.421670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.421696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.421850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.421876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.422027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.422052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.422173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.422199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.422335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.422362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.422495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.422522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.422657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.422691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.422823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.422850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 
00:26:28.985 [2024-07-12 16:02:58.422979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.423005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.423156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.423182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.423333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.423359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.423487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.423512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.423640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.423667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.423797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.423823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.423972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.423999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.424124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.424149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.424281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.424308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.424445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.424472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 
00:26:28.985 [2024-07-12 16:02:58.424632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.424658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.424789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.424824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.424967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.424994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.425118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.425144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.425305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.425343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.425480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.425507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.425671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.985 [2024-07-12 16:02:58.425698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.985 qpair failed and we were unable to recover it. 00:26:28.985 [2024-07-12 16:02:58.425823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.425857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.426012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.426038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.426172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.426199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-12 16:02:58.426329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.426358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.426489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.426516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.426674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.426701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.426852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.426878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.427005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.427032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.427152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.427178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.427307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.427345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.427496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.427522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.427646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.427672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.427792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.427818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-12 16:02:58.427962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.427987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.428105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.428130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.428252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.428278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.428424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.428450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.428600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.428625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.428758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.428784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.428907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.428934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.429064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.429090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.429243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.429268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.429403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.429431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-12 16:02:58.429568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.429594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.429757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.429782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.429908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.429935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.430071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.430097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.430226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.430252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.430416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.430442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.430569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.430596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.430746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.430772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.430890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.430915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.431067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.431093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 
00:26:28.986 [2024-07-12 16:02:58.431251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.431277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.431406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.431433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.431554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.431580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.431733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.431760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.431907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.431933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.432053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.432082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.432247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.432272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.432415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.986 [2024-07-12 16:02:58.432442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.986 qpair failed and we were unable to recover it. 00:26:28.986 [2024-07-12 16:02:58.432565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.432591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.432749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.432775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 
00:26:28.987 [2024-07-12 16:02:58.432902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.432929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.433068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.433094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.433218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.433244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.433386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.433413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.433537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.433563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.433715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.433741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.433917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.433953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.434112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.434138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.434266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.434291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 00:26:28.987 [2024-07-12 16:02:58.434433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.987 [2024-07-12 16:02:58.434460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.987 qpair failed and we were unable to recover it. 
00:26:28.987 [... the same two errors repeat for every subsequent connection attempt from 16:02:58.434 through 16:02:58.465: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:28.992 [2024-07-12 16:02:58.465969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.465995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.466156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.466182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.466339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.466366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.466531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.466557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.466686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.466711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.466831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.466856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.466984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.467014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.467150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.467176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.467301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.467338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.467498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.467523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 
00:26:28.992 [2024-07-12 16:02:58.467658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.467684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.467834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.467860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.467986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.468012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.468136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.468162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.468278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.468303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.468440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.468466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.468602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.468627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.468762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.468787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.468909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.468934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.469061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.469087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 
00:26:28.992 [2024-07-12 16:02:58.469220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.469245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.469378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.469405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.469536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.469562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.469712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.469737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.469866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.469892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.470022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.470047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.470170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.470196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.470333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.470364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.470490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.470517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.470644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.470670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 
00:26:28.992 [2024-07-12 16:02:58.470801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.470827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.470945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.470971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.471101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.471127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.471258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.471284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.471424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.471450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.471579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.471605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.992 [2024-07-12 16:02:58.471743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.992 [2024-07-12 16:02:58.471770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.992 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.471890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.471916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.472056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.472082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.472233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.472258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-12 16:02:58.472396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.472422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.472549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.472575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.472733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.472760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.472914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.472940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.473085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.473111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.473240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.473265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.473395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.473422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.473593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.473619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.473744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.473770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.473894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.473920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-12 16:02:58.474059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.474085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.474214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.474240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.474392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.474418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.474549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.474575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.474707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.474732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.474874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.474900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.475058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.475085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.475234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.475261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.475397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.475424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.475551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.475576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-12 16:02:58.475738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.475771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.475900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.475927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.476050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.476076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.476233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.476259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.476391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.476418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.476541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.476567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.476717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.476746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.476890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.476916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.477046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.477071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.477209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.477235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 
00:26:28.993 [2024-07-12 16:02:58.477379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.993 [2024-07-12 16:02:58.477405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.993 qpair failed and we were unable to recover it. 00:26:28.993 [2024-07-12 16:02:58.477531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.477557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.477692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.477718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.477861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.477890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.478046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.478071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.478206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.478231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.478371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.478398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.478545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.478571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.478702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.478728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.478860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.478886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-12 16:02:58.479030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.479055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.479191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.479216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.479347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.479374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.479513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.479539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.479673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.479698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.479822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.479847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.480020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.480046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.480210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.480235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.480367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.480398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.480518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.480544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-12 16:02:58.480686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.480711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.480842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.480868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.480986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.481012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.481149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.481175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.481320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.481346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.481480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.481506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.481633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.481659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.481790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.481815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.481943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.481968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.482090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.482115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-12 16:02:58.482267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.482293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.482444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.482484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.482608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.482635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.482761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.482788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.482941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.482968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.483133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.483159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.483288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.483322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.483468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.483495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.483622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.483648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.483769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.483795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 
00:26:28.994 [2024-07-12 16:02:58.483958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.483984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.994 [2024-07-12 16:02:58.484105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.994 [2024-07-12 16:02:58.484131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.994 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.484264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.484290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.484430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.484461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.484592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.484619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.484740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.484766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.484898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.484924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.485058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.485084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.485204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.485230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.485375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.485401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-12 16:02:58.485532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.485559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.485703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.485728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.485858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.485883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.486014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.486041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.486170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.486195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.486339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.486365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.486487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.486512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.486640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.486666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.486790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.486816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.486935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.486960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-12 16:02:58.487083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.487108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.487289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.487335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.487472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.487500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.487633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.487660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.487787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.487813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.487971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.487997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.488157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.488183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.488330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.488357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.488486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.488513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.488636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.488662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-12 16:02:58.488794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.488819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.488958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.488985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.489121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.489147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.489296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.489328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.489465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.489492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.489623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.489649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.489823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.489849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.489980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.490006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.490137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.490163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.490296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.490330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 
00:26:28.995 [2024-07-12 16:02:58.490462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.490489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.490611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.995 [2024-07-12 16:02:58.490637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.995 qpair failed and we were unable to recover it. 00:26:28.995 [2024-07-12 16:02:58.490764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.490791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.490930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.490961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.491094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.491119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.491269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.491308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.491474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.491502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.491636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.491662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.491817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.491843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.491968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.491993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 
00:26:28.996 [2024-07-12 16:02:58.492110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.492137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.492265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.492291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.492459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.492486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.492622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.492648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:28.996 [2024-07-12 16:02:58.492783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.492810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:26:28.996 [2024-07-12 16:02:58.492938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.492965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:28.996 [2024-07-12 16:02:58.493097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.493124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:28.996 [2024-07-12 16:02:58.493246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.493273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:28.996 [2024-07-12 16:02:58.493407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.493434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.493568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.493594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.493716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.493742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.493913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.493939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.494062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.494088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.494229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.494275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.494430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.494458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.494582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.494613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.494756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:28.996 [2024-07-12 16:02:58.494785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420
00:26:28.996 qpair failed and we were unable to recover it.
00:26:28.996 [2024-07-12 16:02:58.494925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.494951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.495087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.495114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.495241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.495269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.495431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.495459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.495586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.495611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.495766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.495792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.495913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.495939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.496066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.496091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.496230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.496255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.496397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.496425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 
00:26:28.996 [2024-07-12 16:02:58.496556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.496583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.496720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.496746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.496873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.496899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.497024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.996 [2024-07-12 16:02:58.497050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.996 qpair failed and we were unable to recover it. 00:26:28.996 [2024-07-12 16:02:58.497184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.497215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.497382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.497408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.497527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.497553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.497691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.497716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.497838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.497864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.497986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.498011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 
00:26:28.997 [2024-07-12 16:02:58.498131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.498157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.498298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.498332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.498468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.498494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.498664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.498691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.498822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.498849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.498986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.499015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.499195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.499221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.499356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.499383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.499531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.499557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.499732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.499759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 
00:26:28.997 [2024-07-12 16:02:58.499890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.499915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.500072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.500104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.500228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.500254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.500382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.500409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.500550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.500576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.500708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.500735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.500888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.500913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.501054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.501080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.501200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.501225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.501368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.501395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 
00:26:28.997 [2024-07-12 16:02:58.501531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.501556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.501694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.501721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.501847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.501873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.502005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.502032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.502152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.502178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.502330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.502356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.502482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.502507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.502648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.502674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.502799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.502825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.502953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.502978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 
00:26:28.997 [2024-07-12 16:02:58.503117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.503143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.503276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.503303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.503450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.503476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.503604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.503630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.503757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.997 [2024-07-12 16:02:58.503789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.997 qpair failed and we were unable to recover it. 00:26:28.997 [2024-07-12 16:02:58.503924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.503950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.504116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.504142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.504277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.504302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.504461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.504488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.504611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.504647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 
00:26:28.998 [2024-07-12 16:02:58.504778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.504803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.504931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.504957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.505091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.505118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.505239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.505265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.505409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.505435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.505564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.505590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.505726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.505752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.505877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.505902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.506065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.506091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.506220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.506247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 
00:26:28.998 [2024-07-12 16:02:58.506382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.506409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.506546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.506572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.506710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.506736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.506871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.506904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.507056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.507082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.507213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.507239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.507380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.507407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.507562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.507587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.507722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.507748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.507878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.507904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 
00:26:28.998 [2024-07-12 16:02:58.508076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.508102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.508234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.508261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.508415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.508443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.508568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.508594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.508730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.508756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.508880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.508905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.509037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.509063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.509186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.509213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.509346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.509374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.998 [2024-07-12 16:02:58.509501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.509527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 
00:26:28.998 [2024-07-12 16:02:58.509670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.998 [2024-07-12 16:02:58.509696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.998 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.509816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.509843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.509961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.509986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.510129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.510155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.999 [2024-07-12 16:02:58.510287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.510330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.510462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.510491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:28.999 [2024-07-12 16:02:58.510631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.510657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.999 [2024-07-12 16:02:58.510808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.510835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 
00:26:28.999 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.999 [2024-07-12 16:02:58.510961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.510987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.511126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.511151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.511278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.511304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.511457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.511483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.511608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.511642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.511785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.511811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.511943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.511969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.512103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.512129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.512266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.512291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 
00:26:28.999 [2024-07-12 16:02:58.512417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.512443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.512579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.512604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.512751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.512777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.512902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.512928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.513051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.513076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.513239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.513265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.513391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.513418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.513552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.513577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.513715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.513742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.513888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.513914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 
00:26:28.999 [2024-07-12 16:02:58.514047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.514073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.514204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.514229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.514375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.514405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.514551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.514577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.514703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.514729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.514854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.514880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.515001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.515026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.515153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.515181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.515310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.515342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.515473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.515499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 
00:26:28.999 [2024-07-12 16:02:58.515627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.515654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.515782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.515808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.515940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.999 [2024-07-12 16:02:58.515965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:28.999 qpair failed and we were unable to recover it. 00:26:28.999 [2024-07-12 16:02:58.516091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.516116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.516236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.516262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.516395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.516421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.516560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.516586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.516723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.516749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.516875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.516901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.517027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.517054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 
00:26:29.000 [2024-07-12 16:02:58.517203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.517229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.517356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.517384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.517511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.517538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.517658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.517684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.517808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.517833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.517976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.518002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.518152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.518177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.518300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.518332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.518461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.518486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.518625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.518651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 
00:26:29.000 [2024-07-12 16:02:58.518781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.518806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.518956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.518982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.519112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.519138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.519267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.519292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.519461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.519512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.519687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.519716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.519850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.519876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.520009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.520045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.520199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.520235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.520408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.520444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 
00:26:29.000 [2024-07-12 16:02:58.520584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.520611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.520739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.520773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.520920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.520962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.521164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.521200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.521409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.521446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.521622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.521659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.521807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.521833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.521987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.522013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.522157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.522183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.522341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.522380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 
00:26:29.000 [2024-07-12 16:02:58.522519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.522546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.522694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.522720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.522840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.000 [2024-07-12 16:02:58.522866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.000 qpair failed and we were unable to recover it. 00:26:29.000 [2024-07-12 16:02:58.522987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.523012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.523133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.523159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.523287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.523321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.523478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.523504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.523630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.523656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.523805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.523831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.523994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.524020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 
00:26:29.001 [2024-07-12 16:02:58.524164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.524189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.524348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.524374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.524504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.524530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.524655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.524681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.524809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.524835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.524961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.524988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.525172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.525197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.525327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.525359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.525483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.525509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.525652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.525682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 
00:26:29.001 [2024-07-12 16:02:58.525813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.525838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.525962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.525987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.526127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.526152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.526277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.526310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.526444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.526470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.526619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.526645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.526781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.526807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.526934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.526961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.527101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.527126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.527250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.527276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 
00:26:29.001 [2024-07-12 16:02:58.527420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.527446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.527575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.527602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.527744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.527771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.527992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.528018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.528142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.528167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.528293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.528345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.528473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.528499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.528643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.528668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.528795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.528820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.528978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.529004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 
00:26:29.001 [2024-07-12 16:02:58.529138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.529164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.529285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.529330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.529478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.001 [2024-07-12 16:02:58.529503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.001 qpair failed and we were unable to recover it. 00:26:29.001 [2024-07-12 16:02:58.529631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.529656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.529798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.529824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.529975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.530001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.530138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.530164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.530324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.530351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.530488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.530515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.530682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.530708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 
00:26:29.002 [2024-07-12 16:02:58.530868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.530896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.531020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.531046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.531188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.531215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.531343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.531370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.531496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.531523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.531679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.531704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.531828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.531853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.531976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.532001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.532135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.532161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.532288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.532327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 
00:26:29.002 [2024-07-12 16:02:58.532461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.532487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.532610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.532636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.532771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.532796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.532920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.532947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.533078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.533103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.533226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.533252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.533391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.533418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.533539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.533565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.533712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.533738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.533890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.533916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 
00:26:29.002 [2024-07-12 16:02:58.534034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.534059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.534197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.534222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.534352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.534379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.534526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.534552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.534719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.534744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.534888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.534913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.535040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.535066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.535203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.535228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.002 [2024-07-12 16:02:58.535364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.002 [2024-07-12 16:02:58.535391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.002 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.535527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.535554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 
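For context on the block of failures above: errno 111 on Linux is ECONNREFUSED, meaning each connect() to 10.0.0.2:4420 was actively refused because nothing was accepting connections on that address and port at that moment, which is consistent with the target-side listener being down while this disconnect test runs. Each retry produces one posix_sock_create error, one nvme_tcp_qpair_connect_sock error, and one "qpair failed" line, so the repetition reflects the initiator's reconnect attempts rather than distinct faults. Two quick checks that could be run on the test host to confirm the errno mapping and the listener state (illustrative commands, not part of the test scripts; they assume python3 and GNU coreutils are available on the node):

# Decode errno 111 (hypothetical helper, not from this log):
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# expected output: ECONNREFUSED - Connection refused

# Probe whether anything is currently listening on the target port, using bash's /dev/tcp:
timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo "listener up" || echo "listener down (refused or timed out)"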
00:26:29.003 Malloc0 00:26:29.003 [2024-07-12 16:02:58.535721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.535747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.535878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.535903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.536035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.536067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.003 [2024-07-12 16:02:58.536197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.536223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:29.003 [2024-07-12 16:02:58.536357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.536384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.003 [2024-07-12 16:02:58.536531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.536558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.536716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.536743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.536888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.536914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 
00:26:29.003 [2024-07-12 16:02:58.537059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.537084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.537234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.537260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.537417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.537444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.537569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.537596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.537727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.537753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.537880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.537907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.538029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.538055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.538208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.538234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.538390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.538416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.538540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.538565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 
00:26:29.003 [2024-07-12 16:02:58.538707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.538732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.538863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.538888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.539016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.539042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.539168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.539194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.539328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.539354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.539415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.003 [2024-07-12 16:02:58.539477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.539502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.539620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.539646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.539778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.539804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.539926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.539952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 
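The "*** TCP Transport Init ***" notice above is the target side acknowledging the rpc_cmd nvmf_create_transport -t tcp -o call issued a few lines earlier by host/target_disconnect.sh; in the SPDK test framework rpc_cmd forwards to scripts/rpc.py against the running nvmf_tgt. A minimal sketch of the same step outside the harness (path relative to an SPDK checkout; the -o flag is reproduced from the log as given):

# Sketch: create the TCP transport on an already running nvmf_tgt.
./scripts/rpc.py nvmf_create_transport -t tcp -o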
00:26:29.003 [2024-07-12 16:02:58.540080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.540105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.540270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.540297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.540443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.540468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.540605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.540630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.540775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.540800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.540926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.540952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.541075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.541100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.541250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.541275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.541431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.541458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 00:26:29.003 [2024-07-12 16:02:58.541589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.003 [2024-07-12 16:02:58.541622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.003 qpair failed and we were unable to recover it. 
00:26:29.003 [2024-07-12 16:02:58.541748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.541774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.541902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.541928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.542055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.542080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.542205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.542231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.542381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.542407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.542532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.542557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.542692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.542718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.542848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.542874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.543009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.543034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.543159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.543185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 
00:26:29.004 [2024-07-12 16:02:58.543321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.543348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.543498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.543523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.543687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.543713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.543865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.543891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.544029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.544055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.544183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.544210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.544340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.544367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.544509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.544535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.544694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.544720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.544908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.544934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 
00:26:29.004 [2024-07-12 16:02:58.545089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.545119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.545273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.545298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.545444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.545472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.545597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.545623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.545776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.545801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.545935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.545961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.546094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.546119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.546267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.546293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.546455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.546496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.546637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.546664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 
00:26:29.004 [2024-07-12 16:02:58.546790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.546816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.546951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.546978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.547158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.547184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.547346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.547373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 [2024-07-12 16:02:58.547518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.547545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.004 [2024-07-12 16:02:58.547710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.547736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.004 [2024-07-12 16:02:58.547887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.547913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 00:26:29.004 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.004 [2024-07-12 16:02:58.548072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.004 [2024-07-12 16:02:58.548100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.004 qpair failed and we were unable to recover it. 
00:26:29.004 [2024-07-12 16:02:58.548267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.548293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.548434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.548462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.548595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.548621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.548774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.548800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.548960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.548987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.549111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.549137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.549275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.549304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.549440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.549471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.549607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.549634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.549761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.549787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-12 16:02:58.549925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.549951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.550107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.550134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.550255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.550281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.550445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.550471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.550598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.550625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.550758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.550784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.550904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.550930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.551082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.551108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.551244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.551283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.551422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.551449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-12 16:02:58.551590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.551616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.551751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.551776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.551904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.551931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.552057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.552082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.552235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.552261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.552387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.552413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.552538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.552563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.552687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.552713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.552850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.552876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.552999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.553025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-12 16:02:58.553151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.553180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.553322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.553348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.553487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.553513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.553636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.553662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.553804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.553831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.553979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.554005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.554158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.554183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.554333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.554360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.554491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.554518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.554644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.554669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 
00:26:29.005 [2024-07-12 16:02:58.554802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.005 [2024-07-12 16:02:58.554829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.005 qpair failed and we were unable to recover it. 00:26:29.005 [2024-07-12 16:02:58.554956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.554982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe04000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.555125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.555153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.555279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.555304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.555447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.555473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.006 [2024-07-12 16:02:58.555599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.555625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:29.006 [2024-07-12 16:02:58.555754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.555785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.006 [2024-07-12 16:02:58.555950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.006 [2024-07-12 16:02:58.555976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 
00:26:29.006 [2024-07-12 16:02:58.556116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.556142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.556293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.556325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.556460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.556485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.556611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.556637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.556757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.556783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.556900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.556926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.557089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.557114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.557235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.557260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.557399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.557426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.557556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.557582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 
00:26:29.006 [2024-07-12 16:02:58.557716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.557742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.557875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.557900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.558047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.558073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.558198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.558224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.558361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.558387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.558516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.558542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.558675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.558702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.558856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.558882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.559018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.559044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.559180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.559205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 
00:26:29.006 [2024-07-12 16:02:58.559357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.559383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.559514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.559540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.559663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.559688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.559810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.559835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.559962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.559993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.560150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.560175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.560306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.006 [2024-07-12 16:02:58.560337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.006 qpair failed and we were unable to recover it. 00:26:29.006 [2024-07-12 16:02:58.560487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.560512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.560655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.560680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.560858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.560883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 [2024-07-12 16:02:58.561011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.561036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.561188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.561214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.561345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.561373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.561493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.561519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.561650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.561676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.561817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.561844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.561975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.562001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.562157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.562183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.562312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.562343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.562474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.562500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 [2024-07-12 16:02:58.562625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.562651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.562800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.562825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.562964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.562989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.563139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.563165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.563294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.563336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.563467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.563493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.007 [2024-07-12 16:02:58.563638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.563664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.007 [2024-07-12 16:02:58.563788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.563815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.007 [2024-07-12 16:02:58.563952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.563978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.007 [2024-07-12 16:02:58.564100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.564126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.564264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.564289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.564434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.564460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.564584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.564610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.564755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.564781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.564938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.564964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.565111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.565137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.565258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.565283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 
00:26:29.007 [2024-07-12 16:02:58.565419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.565446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.565571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.565597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.565721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.565747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.565872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.565898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.566032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.566058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.566181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.566211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.566333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.566359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.566516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.566542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.566672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.007 [2024-07-12 16:02:58.566698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.007 qpair failed and we were unable to recover it. 00:26:29.007 [2024-07-12 16:02:58.566820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-12 16:02:58.566845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 [2024-07-12 16:02:58.566970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-12 16:02:58.566997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.567120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-12 16:02:58.567146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.567287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-12 16:02:58.567312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7efe14000b90 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.567499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.008 [2024-07-12 16:02:58.567539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c5b3f0 with addr=10.0.0.2, port=4420 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.567620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.008 [2024-07-12 16:02:58.570150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.570301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.570339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.570358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.570372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.570406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.008 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:29.008 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.008 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:29.008 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.008 16:02:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 133776 00:26:29.008 [2024-07-12 16:02:58.580047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.580184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.580212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.580227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.580240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.580269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.590028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.590164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.590190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.590205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.590218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.590248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 
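The rpc_cmd invocations interleaved with the socket errors above are the target-side setup for this test case: create subsystem nqn.2016-06.io.spdk:cnode1 (allow any host, serial SPDK00000000000001), attach the Malloc0 namespace, and add TCP listeners on 10.0.0.2:4420 for the subsystem and for discovery. Outside the autotest harness the same configuration can be reproduced against a running nvmf_tgt with scripts/rpc.py; this is a minimal sketch assuming the default RPC socket, with the transport and Malloc0 bdev creation (which happen earlier in the test, and whose size arguments here are purely illustrative) included only to make it self-contained:

  # transport and backing bdev (created earlier in the test script)
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  # subsystem setup, matching the rpc_cmd calls logged above
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420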
00:26:29.008 [2024-07-12 16:02:58.599975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.600139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.600165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.600180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.600193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.600222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.610009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.610148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.610174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.610188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.610204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.610235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.620072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.620205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.620231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.620246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.620259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.620287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 [2024-07-12 16:02:58.630044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.630166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.630192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.630207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.630220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.630248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.640105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.640236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.640262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.640276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.640289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.640326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.650085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.650215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.650240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.650255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.650267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.650296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.008 [2024-07-12 16:02:58.660107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.660243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.660268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.660283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.660304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.660342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.670160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.670291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.670327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.670346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.670360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.670390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 00:26:29.008 [2024-07-12 16:02:58.680165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.008 [2024-07-12 16:02:58.680300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.008 [2024-07-12 16:02:58.680336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.008 [2024-07-12 16:02:58.680352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.008 [2024-07-12 16:02:58.680366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.008 [2024-07-12 16:02:58.680395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.008 qpair failed and we were unable to recover it. 
00:26:29.268 [2024-07-12 16:02:58.690299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.690433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.690459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.690474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.690487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.690516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.700280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.700433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.700459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.700474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.700488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.700517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.710289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.710428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.710455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.710470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.710483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.710511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 
00:26:29.268 [2024-07-12 16:02:58.720310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.720489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.720515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.720530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.720543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.720572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.730361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.730499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.730527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.730545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.730559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.730588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.740404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.740581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.740607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.740622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.740635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.740663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 
00:26:29.268 [2024-07-12 16:02:58.750400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.750532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.750557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.750578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.750592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.750621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.760526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.760675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.760701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.760717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.760733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.760761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.770540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.770671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.770696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.770711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.770724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.770752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 
00:26:29.268 [2024-07-12 16:02:58.780598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.780748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.780773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.780788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.780801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.780829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.790568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.790699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.790724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.790739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.790752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.790780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.800545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.800709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.800734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.800749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.800763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.800791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 
00:26:29.268 [2024-07-12 16:02:58.810579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.810754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.810780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.810794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.810807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.810836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.820591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.820714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.820740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.820755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.820768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.268 [2024-07-12 16:02:58.820796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.268 qpair failed and we were unable to recover it. 00:26:29.268 [2024-07-12 16:02:58.830652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.268 [2024-07-12 16:02:58.830781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.268 [2024-07-12 16:02:58.830807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.268 [2024-07-12 16:02:58.830822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.268 [2024-07-12 16:02:58.830838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.830866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 
00:26:29.269 [2024-07-12 16:02:58.840619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.840753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.840778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.840799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.840812] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.840842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.850684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.850830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.850854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.850869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.850882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.850910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.860756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.860917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.860942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.860957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.860970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.860998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 
00:26:29.269 [2024-07-12 16:02:58.870744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.870869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.870894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.870909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.870922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.870950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.880776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.880944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.880969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.880984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.880998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.881026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.890806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.890943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.890970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.890991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.891006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.891035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 
00:26:29.269 [2024-07-12 16:02:58.900818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.900944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.900969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.900984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.900998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.901026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.910879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.911011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.911038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.911056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.911070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.911098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.920861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.920990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.921016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.921031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.921044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.921072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 
00:26:29.269 [2024-07-12 16:02:58.930916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.931054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.931080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.931101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.931115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.931151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.940915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.941038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.941064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.941079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.941092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.941120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.950939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.951062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.951087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.951102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.951115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.951143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 
00:26:29.269 [2024-07-12 16:02:58.961015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.961160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.961185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.961200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.961213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.961241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.269 [2024-07-12 16:02:58.971080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.269 [2024-07-12 16:02:58.971213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.269 [2024-07-12 16:02:58.971239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.269 [2024-07-12 16:02:58.971253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.269 [2024-07-12 16:02:58.971265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.269 [2024-07-12 16:02:58.971293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.269 qpair failed and we were unable to recover it. 00:26:29.270 [2024-07-12 16:02:58.981025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.270 [2024-07-12 16:02:58.981176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.270 [2024-07-12 16:02:58.981201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.270 [2024-07-12 16:02:58.981216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.270 [2024-07-12 16:02:58.981229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.270 [2024-07-12 16:02:58.981257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.270 qpair failed and we were unable to recover it. 
00:26:29.270 [2024-07-12 16:02:58.991087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.270 [2024-07-12 16:02:58.991213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.270 [2024-07-12 16:02:58.991239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.270 [2024-07-12 16:02:58.991254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.270 [2024-07-12 16:02:58.991267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.270 [2024-07-12 16:02:58.991295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.270 qpair failed and we were unable to recover it. 00:26:29.529 [2024-07-12 16:02:59.001089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.529 [2024-07-12 16:02:59.001248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.529 [2024-07-12 16:02:59.001273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.529 [2024-07-12 16:02:59.001288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.529 [2024-07-12 16:02:59.001301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.529 [2024-07-12 16:02:59.001337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.529 qpair failed and we were unable to recover it. 00:26:29.529 [2024-07-12 16:02:59.011118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.529 [2024-07-12 16:02:59.011248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.529 [2024-07-12 16:02:59.011273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.011288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.011301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.011337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 
00:26:29.530 [2024-07-12 16:02:59.021152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.021288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.021325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.021344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.021357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.021386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.031198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.031330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.031355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.031369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.031383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.031411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.041238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.041382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.041407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.041422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.041435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.041464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 
00:26:29.530 [2024-07-12 16:02:59.051236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.051380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.051405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.051420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.051434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.051462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.061293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.061476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.061502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.061517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.061530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.061564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.071284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.071435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.071459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.071474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.071487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.071515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 
00:26:29.530 [2024-07-12 16:02:59.081337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.081510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.081535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.081550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.081563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.081591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.091346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.091481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.091506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.091521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.091534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.091562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.101373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.101506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.101533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.101547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.101560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.101589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 
00:26:29.530 [2024-07-12 16:02:59.111397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.111526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.111557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.111573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.111585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.111613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.121454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.121590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.121615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.121630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.121643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.121671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.131465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.131612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.131640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.131656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.131669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.131698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 
00:26:29.530 [2024-07-12 16:02:59.141533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.141665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.141691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.141706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.530 [2024-07-12 16:02:59.141719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.530 [2024-07-12 16:02:59.141748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.530 qpair failed and we were unable to recover it. 00:26:29.530 [2024-07-12 16:02:59.151522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.530 [2024-07-12 16:02:59.151652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.530 [2024-07-12 16:02:59.151677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.530 [2024-07-12 16:02:59.151692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.151705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.151739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 00:26:29.531 [2024-07-12 16:02:59.161573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.161707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.161732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.161746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.161760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.161787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 
00:26:29.531 [2024-07-12 16:02:59.171566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.171702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.171727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.171742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.171754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.171782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 00:26:29.531 [2024-07-12 16:02:59.181649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.181802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.181827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.181842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.181855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.181883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 00:26:29.531 [2024-07-12 16:02:59.191639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.191780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.191805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.191819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.191833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.191861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 
00:26:29.531 [2024-07-12 16:02:59.201674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.201816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.201846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.201862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.201875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.201904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 00:26:29.531 [2024-07-12 16:02:59.211665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.211798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.211823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.211838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.211850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.211878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 00:26:29.531 [2024-07-12 16:02:59.221718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.221886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.221912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.221926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.221940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.221968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 
00:26:29.531 [2024-07-12 16:02:59.231767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.231912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.231937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.231951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.231965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.231994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 00:26:29.531 [2024-07-12 16:02:59.241792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.241927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.241951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.241966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.241979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.242014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 00:26:29.531 [2024-07-12 16:02:59.251785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.531 [2024-07-12 16:02:59.251910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.531 [2024-07-12 16:02:59.251935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.531 [2024-07-12 16:02:59.251950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.531 [2024-07-12 16:02:59.251963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.531 [2024-07-12 16:02:59.251991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.531 qpair failed and we were unable to recover it. 
00:26:29.791 [2024-07-12 16:02:59.261885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.262027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.262052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.262067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.262080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.791 [2024-07-12 16:02:59.262109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.791 qpair failed and we were unable to recover it. 00:26:29.791 [2024-07-12 16:02:59.271874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.272018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.272043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.272058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.272071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.791 [2024-07-12 16:02:59.272101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.791 qpair failed and we were unable to recover it. 00:26:29.791 [2024-07-12 16:02:59.281888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.282028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.282053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.282068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.282081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.791 [2024-07-12 16:02:59.282109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.791 qpair failed and we were unable to recover it. 
00:26:29.791 [2024-07-12 16:02:59.291958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.292091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.292120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.292136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.292149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.791 [2024-07-12 16:02:59.292178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.791 qpair failed and we were unable to recover it. 00:26:29.791 [2024-07-12 16:02:59.301922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.302050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.302076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.302090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.302104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.791 [2024-07-12 16:02:59.302132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.791 qpair failed and we were unable to recover it. 00:26:29.791 [2024-07-12 16:02:59.311949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.312109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.312134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.312148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.312162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.791 [2024-07-12 16:02:59.312190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.791 qpair failed and we were unable to recover it. 
00:26:29.791 [2024-07-12 16:02:59.322002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.322136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.322161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.322176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.322189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.791 [2024-07-12 16:02:59.322217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.791 qpair failed and we were unable to recover it. 00:26:29.791 [2024-07-12 16:02:59.332007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.791 [2024-07-12 16:02:59.332134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.791 [2024-07-12 16:02:59.332158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.791 [2024-07-12 16:02:59.332173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.791 [2024-07-12 16:02:59.332194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.332223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.342031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.342152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.342178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.342193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.342206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.342234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 
00:26:29.792 [2024-07-12 16:02:59.352108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.352238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.352264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.352278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.352291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.352328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.362132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.362262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.362287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.362302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.362321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.362352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.372150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.372283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.372307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.372330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.372344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.372373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 
00:26:29.792 [2024-07-12 16:02:59.382164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.382289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.382322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.382340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.382354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.382384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.392215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.392358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.392385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.392400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.392413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.392442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.402247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.402406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.402432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.402446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.402459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.402488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 
00:26:29.792 [2024-07-12 16:02:59.412290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.412459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.412485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.412499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.412513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.412542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.422276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.422407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.422434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.422449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.422467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.422496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.432292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.432428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.432454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.432468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.432481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.432510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 
00:26:29.792 [2024-07-12 16:02:59.442368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.442527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.442552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.442567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.442580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.442608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.452366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.452506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.452532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.452547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.452560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.452588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.792 [2024-07-12 16:02:59.462428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.462572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.462598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.462616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.462630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.462658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 
00:26:29.792 [2024-07-12 16:02:59.472434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.792 [2024-07-12 16:02:59.472575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.792 [2024-07-12 16:02:59.472601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.792 [2024-07-12 16:02:59.472616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.792 [2024-07-12 16:02:59.472629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.792 [2024-07-12 16:02:59.472656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.792 qpair failed and we were unable to recover it. 00:26:29.793 [2024-07-12 16:02:59.482472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.793 [2024-07-12 16:02:59.482607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.793 [2024-07-12 16:02:59.482633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.793 [2024-07-12 16:02:59.482647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.793 [2024-07-12 16:02:59.482661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.793 [2024-07-12 16:02:59.482689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-07-12 16:02:59.492486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.793 [2024-07-12 16:02:59.492625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.793 [2024-07-12 16:02:59.492650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.793 [2024-07-12 16:02:59.492665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.793 [2024-07-12 16:02:59.492678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.793 [2024-07-12 16:02:59.492707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.793 qpair failed and we were unable to recover it. 
00:26:29.793 [2024-07-12 16:02:59.502502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.793 [2024-07-12 16:02:59.502643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.793 [2024-07-12 16:02:59.502668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.793 [2024-07-12 16:02:59.502683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.793 [2024-07-12 16:02:59.502696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.793 [2024-07-12 16:02:59.502725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-07-12 16:02:59.512526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:29.793 [2024-07-12 16:02:59.512676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:29.793 [2024-07-12 16:02:59.512702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:29.793 [2024-07-12 16:02:59.512723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:29.793 [2024-07-12 16:02:59.512738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:29.793 [2024-07-12 16:02:59.512768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:29.793 qpair failed and we were unable to recover it. 00:26:30.051 [2024-07-12 16:02:59.522598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.051 [2024-07-12 16:02:59.522735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.051 [2024-07-12 16:02:59.522760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.522776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.522789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.522818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 
00:26:30.052 [2024-07-12 16:02:59.532577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.532711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.532736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.532751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.532765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.532793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.542654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.542788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.542814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.542829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.542842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.542870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.552655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.552781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.552807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.552822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.552835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.552863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 
00:26:30.052 [2024-07-12 16:02:59.562674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.562820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.562845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.562860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.562873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.562901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.572673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.572797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.572821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.572835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.572847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.572875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.582755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.582883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.582909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.582923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.582937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.582965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 
00:26:30.052 [2024-07-12 16:02:59.592739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.592863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.592888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.592904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.592917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.592945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.602763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.602911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.602937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.602959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.602973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.603002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.612787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.612919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.612945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.612960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.612973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.613001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 
00:26:30.052 [2024-07-12 16:02:59.622859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.622986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.623012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.623027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.623040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.623068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.632842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.632968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.632993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.633008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.633021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.633050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.642871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.643003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.643027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.643042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.643055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.643083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 
00:26:30.052 [2024-07-12 16:02:59.652950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.653079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.653104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.653119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.653132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.653160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.052 qpair failed and we were unable to recover it. 00:26:30.052 [2024-07-12 16:02:59.662953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.052 [2024-07-12 16:02:59.663087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.052 [2024-07-12 16:02:59.663112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.052 [2024-07-12 16:02:59.663127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.052 [2024-07-12 16:02:59.663140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.052 [2024-07-12 16:02:59.663168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.053 [2024-07-12 16:02:59.672973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.673102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.673129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.673143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.673156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.673184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 
00:26:30.053 [2024-07-12 16:02:59.683011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.683142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.683167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.683182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.683195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.683224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.053 [2024-07-12 16:02:59.693033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.693162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.693188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.693210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.693224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.693252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.053 [2024-07-12 16:02:59.703076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.703207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.703233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.703248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.703260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.703288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 
00:26:30.053 [2024-07-12 16:02:59.713080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.713215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.713240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.713255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.713268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.713296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.053 [2024-07-12 16:02:59.723127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.723272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.723297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.723312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.723334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.723365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.053 [2024-07-12 16:02:59.733173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.733303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.733338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.733353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.733367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.733395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 
00:26:30.053 [2024-07-12 16:02:59.743188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.743362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.743388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.743403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.743416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.743445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.053 [2024-07-12 16:02:59.753182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.753311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.753343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.753358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.753371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.753400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.053 [2024-07-12 16:02:59.763260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.763398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.763424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.763439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.763452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.763480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 
00:26:30.053 [2024-07-12 16:02:59.773256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.053 [2024-07-12 16:02:59.773390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.053 [2024-07-12 16:02:59.773415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.053 [2024-07-12 16:02:59.773430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.053 [2024-07-12 16:02:59.773443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.053 [2024-07-12 16:02:59.773472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.053 qpair failed and we were unable to recover it. 00:26:30.312 [2024-07-12 16:02:59.783268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.783401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.783432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.783448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.783461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.783489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 00:26:30.312 [2024-07-12 16:02:59.793339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.793469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.793494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.793509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.793522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.793550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 
00:26:30.312 [2024-07-12 16:02:59.803353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.803487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.803513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.803528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.803541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.803569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 00:26:30.312 [2024-07-12 16:02:59.813363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.813486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.813511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.813526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.813539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.813568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 00:26:30.312 [2024-07-12 16:02:59.823385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.823504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.823529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.823544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.823557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.823587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 
00:26:30.312 [2024-07-12 16:02:59.833477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.833604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.833630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.833645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.833658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.833686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 00:26:30.312 [2024-07-12 16:02:59.843509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.843648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.843673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.843688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.843701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.843729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 00:26:30.312 [2024-07-12 16:02:59.853502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.853654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.853680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.853694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.853707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.853735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 
00:26:30.312 [2024-07-12 16:02:59.863504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.312 [2024-07-12 16:02:59.863633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.312 [2024-07-12 16:02:59.863658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.312 [2024-07-12 16:02:59.863673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.312 [2024-07-12 16:02:59.863686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.312 [2024-07-12 16:02:59.863714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.312 qpair failed and we were unable to recover it. 00:26:30.312 [2024-07-12 16:02:59.873547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.873678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.873708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.873723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.873736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.873764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.883578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.883712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.883736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.883751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.883764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.883792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 
00:26:30.313 [2024-07-12 16:02:59.893627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.893749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.893775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.893790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.893803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.893831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.903601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.903729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.903754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.903769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.903782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.903810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.913619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.913763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.913788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.913803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.913816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.913849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 
00:26:30.313 [2024-07-12 16:02:59.923667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.923839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.923864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.923878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.923890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.923919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.933696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.933830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.933855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.933869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.933883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.933911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.943705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.943832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.943857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.943874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.943888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.943916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 
00:26:30.313 [2024-07-12 16:02:59.953728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.953871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.953896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.953911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.953924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.953953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.963833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.963964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.963995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.964015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.964028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.964057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.973795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.973938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.973964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.973979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.973992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.974020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 
00:26:30.313 [2024-07-12 16:02:59.983853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.983982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.984007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.984021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.984035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.984063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:02:59.993875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:02:59.994018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:02:59.994044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:02:59.994059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:02:59.994072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:02:59.994101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 00:26:30.313 [2024-07-12 16:03:00.004028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:03:00.004204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:03:00.004232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.313 [2024-07-12 16:03:00.004249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.313 [2024-07-12 16:03:00.004263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.313 [2024-07-12 16:03:00.004300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.313 qpair failed and we were unable to recover it. 
00:26:30.313 [2024-07-12 16:03:00.013994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.313 [2024-07-12 16:03:00.014160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.313 [2024-07-12 16:03:00.014187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.314 [2024-07-12 16:03:00.014202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.314 [2024-07-12 16:03:00.014216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.314 [2024-07-12 16:03:00.014246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.314 qpair failed and we were unable to recover it. 00:26:30.314 [2024-07-12 16:03:00.023990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.314 [2024-07-12 16:03:00.024165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.314 [2024-07-12 16:03:00.024191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.314 [2024-07-12 16:03:00.024206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.314 [2024-07-12 16:03:00.024219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.314 [2024-07-12 16:03:00.024248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.314 qpair failed and we were unable to recover it. 00:26:30.314 [2024-07-12 16:03:00.033999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.314 [2024-07-12 16:03:00.034127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.314 [2024-07-12 16:03:00.034154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.314 [2024-07-12 16:03:00.034169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.314 [2024-07-12 16:03:00.034182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.314 [2024-07-12 16:03:00.034210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.314 qpair failed and we were unable to recover it. 
00:26:30.572 [2024-07-12 16:03:00.044034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.572 [2024-07-12 16:03:00.044188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.572 [2024-07-12 16:03:00.044214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.044228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.044243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.044271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 00:26:30.573 [2024-07-12 16:03:00.054051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.054183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.054218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.054234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.054248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.054277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 00:26:30.573 [2024-07-12 16:03:00.064084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.064216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.064245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.064261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.064275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.064312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 
00:26:30.573 [2024-07-12 16:03:00.074099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.074224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.074250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.074265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.074279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.074324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 00:26:30.573 [2024-07-12 16:03:00.084130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.084271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.084297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.084312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.084334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.084364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 00:26:30.573 [2024-07-12 16:03:00.094149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.094290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.094326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.094344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.094368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.094397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 
00:26:30.573 [2024-07-12 16:03:00.104210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.104386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.104411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.104426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.104445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.104473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 00:26:30.573 [2024-07-12 16:03:00.114203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.114361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.114387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.114402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.114415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.114444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 00:26:30.573 [2024-07-12 16:03:00.124240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.124392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.124418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.124434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.124448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.124477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 
00:26:30.573 [2024-07-12 16:03:00.134282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.134430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.134457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.134472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.134486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.573 [2024-07-12 16:03:00.134515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.573 qpair failed and we were unable to recover it. 00:26:30.573 [2024-07-12 16:03:00.144295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.573 [2024-07-12 16:03:00.144448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.573 [2024-07-12 16:03:00.144473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.573 [2024-07-12 16:03:00.144488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.573 [2024-07-12 16:03:00.144501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.144530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 00:26:30.574 [2024-07-12 16:03:00.154344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.154498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.154525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.154540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.154552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.154580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 
00:26:30.574 [2024-07-12 16:03:00.164389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.164550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.164576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.164591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.164604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.164633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 00:26:30.574 [2024-07-12 16:03:00.174443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.174578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.174603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.174630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.174643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.174671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 00:26:30.574 [2024-07-12 16:03:00.184429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.184561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.184587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.184601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.184624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.184653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 
00:26:30.574 [2024-07-12 16:03:00.194444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.194572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.194598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.194613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.194630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.194658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 00:26:30.574 [2024-07-12 16:03:00.204519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.204660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.204685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.204700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.204713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.204752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 00:26:30.574 [2024-07-12 16:03:00.214517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.214647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.214673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.214688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.214701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.214729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 
00:26:30.574 [2024-07-12 16:03:00.224515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.224645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.224669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.224684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.224697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.224725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 00:26:30.574 [2024-07-12 16:03:00.234552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.234686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.234711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.234726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.234739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.234767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 00:26:30.574 [2024-07-12 16:03:00.244610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.244746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.244771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.574 [2024-07-12 16:03:00.244786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.574 [2024-07-12 16:03:00.244799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.574 [2024-07-12 16:03:00.244827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.574 qpair failed and we were unable to recover it. 
00:26:30.574 [2024-07-12 16:03:00.254611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.574 [2024-07-12 16:03:00.254741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.574 [2024-07-12 16:03:00.254766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.575 [2024-07-12 16:03:00.254781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.575 [2024-07-12 16:03:00.254794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.575 [2024-07-12 16:03:00.254822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.575 qpair failed and we were unable to recover it. 00:26:30.575 [2024-07-12 16:03:00.264653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.575 [2024-07-12 16:03:00.264826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.575 [2024-07-12 16:03:00.264851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.575 [2024-07-12 16:03:00.264866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.575 [2024-07-12 16:03:00.264880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.575 [2024-07-12 16:03:00.264908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.575 qpair failed and we were unable to recover it. 00:26:30.575 [2024-07-12 16:03:00.274682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.575 [2024-07-12 16:03:00.274805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.575 [2024-07-12 16:03:00.274831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.575 [2024-07-12 16:03:00.274846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.575 [2024-07-12 16:03:00.274865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.575 [2024-07-12 16:03:00.274894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.575 qpair failed and we were unable to recover it. 
00:26:30.575 [2024-07-12 16:03:00.284693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.575 [2024-07-12 16:03:00.284821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.575 [2024-07-12 16:03:00.284847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.575 [2024-07-12 16:03:00.284862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.575 [2024-07-12 16:03:00.284875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.575 [2024-07-12 16:03:00.284904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.575 qpair failed and we were unable to recover it. 00:26:30.575 [2024-07-12 16:03:00.294719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.575 [2024-07-12 16:03:00.294894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.575 [2024-07-12 16:03:00.294920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.575 [2024-07-12 16:03:00.294935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.575 [2024-07-12 16:03:00.294948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.575 [2024-07-12 16:03:00.294976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.575 qpair failed and we were unable to recover it. 00:26:30.834 [2024-07-12 16:03:00.304804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.304952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.304979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.304994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.305010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.305039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 
00:26:30.834 [2024-07-12 16:03:00.314760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.314891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.314917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.314931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.314945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.314974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 00:26:30.834 [2024-07-12 16:03:00.324813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.324942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.324968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.324982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.324996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.325024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 00:26:30.834 [2024-07-12 16:03:00.334873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.334999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.335025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.335040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.335053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.335082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 
00:26:30.834 [2024-07-12 16:03:00.344885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.345016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.345042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.345057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.345070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.345098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 00:26:30.834 [2024-07-12 16:03:00.354878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.355003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.355029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.355044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.355057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.355086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 00:26:30.834 [2024-07-12 16:03:00.364940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.365070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.365096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.365117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.365131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.365161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 
00:26:30.834 [2024-07-12 16:03:00.375001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.375170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.375196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.375211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.375224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.834 [2024-07-12 16:03:00.375253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.834 qpair failed and we were unable to recover it. 00:26:30.834 [2024-07-12 16:03:00.384988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.834 [2024-07-12 16:03:00.385122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.834 [2024-07-12 16:03:00.385147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.834 [2024-07-12 16:03:00.385162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.834 [2024-07-12 16:03:00.385175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.385203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.394974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.395099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.395125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.395140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.395153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.395183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 
00:26:30.835 [2024-07-12 16:03:00.405026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.405176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.405202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.405217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.405230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.405259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.415068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.415196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.415222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.415236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.415249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.415278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.425101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.425227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.425253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.425267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.425281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.425309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 
00:26:30.835 [2024-07-12 16:03:00.435195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.435325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.435351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.435366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.435379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.435408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.445140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.445332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.445358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.445373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.445386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.445414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.455214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.455357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.455384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.455409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.455424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.455456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 
00:26:30.835 [2024-07-12 16:03:00.465243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.465386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.465411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.465427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.465440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.465469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.475252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.475421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.475448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.475463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.475477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.475506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.485282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.485425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.485450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.485464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.485477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.485505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 
00:26:30.835 [2024-07-12 16:03:00.495300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.495440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.495467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.495482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.495495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.495523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.505347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.505475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.505500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.505514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.505528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.505556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.515368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.515502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.515527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.515544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.515558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.515587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 
00:26:30.835 [2024-07-12 16:03:00.525464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.835 [2024-07-12 16:03:00.525622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.835 [2024-07-12 16:03:00.525647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.835 [2024-07-12 16:03:00.525662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.835 [2024-07-12 16:03:00.525675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.835 [2024-07-12 16:03:00.525704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.835 qpair failed and we were unable to recover it. 00:26:30.835 [2024-07-12 16:03:00.535428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.836 [2024-07-12 16:03:00.535589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.836 [2024-07-12 16:03:00.535617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.836 [2024-07-12 16:03:00.535633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.836 [2024-07-12 16:03:00.535647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.836 [2024-07-12 16:03:00.535676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.836 qpair failed and we were unable to recover it. 00:26:30.836 [2024-07-12 16:03:00.545473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.836 [2024-07-12 16:03:00.545648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.836 [2024-07-12 16:03:00.545679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.836 [2024-07-12 16:03:00.545696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.836 [2024-07-12 16:03:00.545709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.836 [2024-07-12 16:03:00.545737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.836 qpair failed and we were unable to recover it. 
00:26:30.836 [2024-07-12 16:03:00.555493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:30.836 [2024-07-12 16:03:00.555660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:30.836 [2024-07-12 16:03:00.555686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:30.836 [2024-07-12 16:03:00.555700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:30.836 [2024-07-12 16:03:00.555714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:30.836 [2024-07-12 16:03:00.555742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:30.836 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.565519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.565684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.565709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.565724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.565737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.565765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.575575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.575709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.575733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.575747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.575760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.575787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 
00:26:31.096 [2024-07-12 16:03:00.585555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.585694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.585720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.585735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.585748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.585776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.595570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.595698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.595724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.595738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.595752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.595781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.605641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.605830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.605858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.605874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.605888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.605917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 
00:26:31.096 [2024-07-12 16:03:00.615668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.615849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.615876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.615899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.615913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.615942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.625649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.625780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.625806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.625821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.625834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.625863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.635678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.635831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.635862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.635877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.635890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.635918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 
00:26:31.096 [2024-07-12 16:03:00.645718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.645851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.645877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.645891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.645904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.645933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.655750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.655881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.655907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.655921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.655934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.096 [2024-07-12 16:03:00.655962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.096 qpair failed and we were unable to recover it. 00:26:31.096 [2024-07-12 16:03:00.665777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.096 [2024-07-12 16:03:00.665911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.096 [2024-07-12 16:03:00.665936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.096 [2024-07-12 16:03:00.665951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.096 [2024-07-12 16:03:00.665963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.665992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 
00:26:31.097 [2024-07-12 16:03:00.675828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.675981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.676007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.676022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.676036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.676070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.685868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.686002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.686027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.686042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.686066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.686095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.695893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.696031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.696057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.696077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.696090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.696119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 
00:26:31.097 [2024-07-12 16:03:00.705871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.706006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.706032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.706047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.706061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.706090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.715887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.716012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.716037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.716052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.716066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.716095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.725993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.726128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.726159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.726174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.726188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.726217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 
00:26:31.097 [2024-07-12 16:03:00.735956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.736089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.736114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.736129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.736142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.736171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.746003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.746168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.746193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.746209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.746222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.746250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.756028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.756155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.756181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.756196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.756209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.756237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 
00:26:31.097 [2024-07-12 16:03:00.766071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.766204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.766230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.766246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.766259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.766293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.776115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.776245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.776270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.776285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.776306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.776345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.786095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.786251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.786277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.786292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.786306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.786341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 
00:26:31.097 [2024-07-12 16:03:00.796126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.796260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.796285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.796300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.796314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.796352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.097 [2024-07-12 16:03:00.806182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.097 [2024-07-12 16:03:00.806323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.097 [2024-07-12 16:03:00.806349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.097 [2024-07-12 16:03:00.806364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.097 [2024-07-12 16:03:00.806377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.097 [2024-07-12 16:03:00.806405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.097 qpair failed and we were unable to recover it. 00:26:31.098 [2024-07-12 16:03:00.816218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.098 [2024-07-12 16:03:00.816354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.098 [2024-07-12 16:03:00.816389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.098 [2024-07-12 16:03:00.816405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.098 [2024-07-12 16:03:00.816417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.098 [2024-07-12 16:03:00.816446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.098 qpair failed and we were unable to recover it. 
00:26:31.356 [2024-07-12 16:03:00.826244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.356 [2024-07-12 16:03:00.826393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.356 [2024-07-12 16:03:00.826420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.356 [2024-07-12 16:03:00.826435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.356 [2024-07-12 16:03:00.826448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.356 [2024-07-12 16:03:00.826477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.356 qpair failed and we were unable to recover it. 00:26:31.356 [2024-07-12 16:03:00.836265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.356 [2024-07-12 16:03:00.836396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.356 [2024-07-12 16:03:00.836422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.356 [2024-07-12 16:03:00.836437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.356 [2024-07-12 16:03:00.836451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.836480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.846270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.846424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.846449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.846464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.846477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.846508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 
00:26:31.357 [2024-07-12 16:03:00.856370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.856501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.856526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.856542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.856563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.856592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.866349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.866475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.866500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.866515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.866528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.866556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.876367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.876504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.876529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.876544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.876557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.876585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 
00:26:31.357 [2024-07-12 16:03:00.886402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.886543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.886568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.886583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.886597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.886625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.896466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.896624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.896650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.896664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.896677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.896708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.906486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.906678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.906704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.906719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.906732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.906760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 
00:26:31.357 [2024-07-12 16:03:00.916475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.916620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.916646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.916661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.916674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.916702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.926539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.926676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.926701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.926716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.926729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.926757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.936546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.936719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.936745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.936760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.936774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.936802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 
00:26:31.357 [2024-07-12 16:03:00.946550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.946680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.946705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.946720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.946739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.946768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.956580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.956714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.956740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.956755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.956768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.956797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 00:26:31.357 [2024-07-12 16:03:00.966644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.966780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.966805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.966820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.966834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.357 [2024-07-12 16:03:00.966862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.357 qpair failed and we were unable to recover it. 
00:26:31.357 [2024-07-12 16:03:00.976671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.357 [2024-07-12 16:03:00.976802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.357 [2024-07-12 16:03:00.976828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.357 [2024-07-12 16:03:00.976843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.357 [2024-07-12 16:03:00.976855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:00.976883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.358 [2024-07-12 16:03:00.986670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:00.986801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:00.986826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:00.986841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:00.986854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:00.986882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.358 [2024-07-12 16:03:00.996695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:00.996825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:00.996850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:00.996864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:00.996877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:00.996905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 
00:26:31.358 [2024-07-12 16:03:01.006747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.006899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.006924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.006939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.006952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.006980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.358 [2024-07-12 16:03:01.016753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.016885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.016911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.016926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.016939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.016967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.358 [2024-07-12 16:03:01.026779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.026908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.026933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.026948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.026962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.026990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 
00:26:31.358 [2024-07-12 16:03:01.036841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.036968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.036994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.037009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.037028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.037057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.358 [2024-07-12 16:03:01.046892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.047030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.047055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.047070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.047084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.047112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.358 [2024-07-12 16:03:01.056876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.057008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.057034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.057049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.057063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.057091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 
00:26:31.358 [2024-07-12 16:03:01.066930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.067058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.067083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.067098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.067111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.067139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.358 [2024-07-12 16:03:01.076937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.358 [2024-07-12 16:03:01.077065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.358 [2024-07-12 16:03:01.077090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.358 [2024-07-12 16:03:01.077105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.358 [2024-07-12 16:03:01.077118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.358 [2024-07-12 16:03:01.077145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.358 qpair failed and we were unable to recover it. 00:26:31.619 [2024-07-12 16:03:01.086981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.087168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.087193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.087208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.087221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.087249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 
00:26:31.619 [2024-07-12 16:03:01.096965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.097093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.097119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.097134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.097147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.097176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 00:26:31.619 [2024-07-12 16:03:01.107003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.107135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.107161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.107176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.107189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.107217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 00:26:31.619 [2024-07-12 16:03:01.117083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.117210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.117235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.117250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.117264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.117292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 
00:26:31.619 [2024-07-12 16:03:01.127066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.127196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.127221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.127242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.127257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.127285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 00:26:31.619 [2024-07-12 16:03:01.137123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.137280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.137306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.137332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.137347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.137376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 00:26:31.619 [2024-07-12 16:03:01.147127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.147254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.147279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.147294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.147307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.147346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 
00:26:31.619 [2024-07-12 16:03:01.157122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.619 [2024-07-12 16:03:01.157248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.619 [2024-07-12 16:03:01.157273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.619 [2024-07-12 16:03:01.157288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.619 [2024-07-12 16:03:01.157301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.619 [2024-07-12 16:03:01.157336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.619 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.167190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.167324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.167349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.167364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.167377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.167405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.177201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.177369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.177395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.177410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.177423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.177451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 
00:26:31.620 [2024-07-12 16:03:01.187219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.187347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.187379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.187394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.187407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.187436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.197241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.197381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.197407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.197422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.197435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.197464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.207375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.207533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.207559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.207574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.207587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.207616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 
00:26:31.620 [2024-07-12 16:03:01.217371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.217536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.217561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.217583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.217596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.217625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.227337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.227492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.227517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.227532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.227545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.227573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.237372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.237503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.237530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.237546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.237560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.237591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 
00:26:31.620 [2024-07-12 16:03:01.247467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.247636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.247661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.247676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.247689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.247717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.257456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.257611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.257636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.257651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.257664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.257692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.267457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.267592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.267617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.267631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.267645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.267673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 
00:26:31.620 [2024-07-12 16:03:01.277516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.277681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.277706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.277721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.277735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.277762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.287532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.287680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.287705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.287720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.287733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.287762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 00:26:31.620 [2024-07-12 16:03:01.297589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.297757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.620 [2024-07-12 16:03:01.297783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.620 [2024-07-12 16:03:01.297797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.620 [2024-07-12 16:03:01.297811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.620 [2024-07-12 16:03:01.297839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.620 qpair failed and we were unable to recover it. 
00:26:31.620 [2024-07-12 16:03:01.307590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.620 [2024-07-12 16:03:01.307723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.621 [2024-07-12 16:03:01.307748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.621 [2024-07-12 16:03:01.307769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.621 [2024-07-12 16:03:01.307783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.621 [2024-07-12 16:03:01.307812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.621 qpair failed and we were unable to recover it. 00:26:31.621 [2024-07-12 16:03:01.317593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.621 [2024-07-12 16:03:01.317729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.621 [2024-07-12 16:03:01.317754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.621 [2024-07-12 16:03:01.317769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.621 [2024-07-12 16:03:01.317782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.621 [2024-07-12 16:03:01.317810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.621 qpair failed and we were unable to recover it. 00:26:31.621 [2024-07-12 16:03:01.327653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.621 [2024-07-12 16:03:01.327787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.621 [2024-07-12 16:03:01.327812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.621 [2024-07-12 16:03:01.327827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.621 [2024-07-12 16:03:01.327840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.621 [2024-07-12 16:03:01.327868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.621 qpair failed and we were unable to recover it. 
00:26:31.621 [2024-07-12 16:03:01.337654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.621 [2024-07-12 16:03:01.337785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.621 [2024-07-12 16:03:01.337811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.621 [2024-07-12 16:03:01.337826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.621 [2024-07-12 16:03:01.337840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.621 [2024-07-12 16:03:01.337868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.621 qpair failed and we were unable to recover it. 00:26:31.880 [2024-07-12 16:03:01.347671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.880 [2024-07-12 16:03:01.347814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.880 [2024-07-12 16:03:01.347843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.880 [2024-07-12 16:03:01.347860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.880 [2024-07-12 16:03:01.347874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.880 [2024-07-12 16:03:01.347903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.880 qpair failed and we were unable to recover it. 00:26:31.880 [2024-07-12 16:03:01.357751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.880 [2024-07-12 16:03:01.357877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.880 [2024-07-12 16:03:01.357903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.880 [2024-07-12 16:03:01.357918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.880 [2024-07-12 16:03:01.357931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.880 [2024-07-12 16:03:01.357959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.880 qpair failed and we were unable to recover it. 
00:26:31.880 [2024-07-12 16:03:01.367784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.880 [2024-07-12 16:03:01.367957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.880 [2024-07-12 16:03:01.367982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.880 [2024-07-12 16:03:01.367996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.880 [2024-07-12 16:03:01.368010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.880 [2024-07-12 16:03:01.368038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.880 qpair failed and we were unable to recover it. 00:26:31.880 [2024-07-12 16:03:01.377926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.880 [2024-07-12 16:03:01.378063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.880 [2024-07-12 16:03:01.378089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.880 [2024-07-12 16:03:01.378103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.880 [2024-07-12 16:03:01.378117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.880 [2024-07-12 16:03:01.378145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.880 qpair failed and we were unable to recover it. 00:26:31.880 [2024-07-12 16:03:01.387849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.387974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.387999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.388014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.388027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.388056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 
00:26:31.881 [2024-07-12 16:03:01.397890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.398016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.398046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.398062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.398076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.398104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 00:26:31.881 [2024-07-12 16:03:01.407897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.408034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.408061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.408076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.408089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.408118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 00:26:31.881 [2024-07-12 16:03:01.417912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.418093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.418120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.418135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.418148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.418177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 
00:26:31.881 [2024-07-12 16:03:01.427898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.428021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.428046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.428061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.428074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.428102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 00:26:31.881 [2024-07-12 16:03:01.437937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.438064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.438089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.438104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.438117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.438151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 00:26:31.881 [2024-07-12 16:03:01.447959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.448114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.448140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.448154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.448168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.448196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 
00:26:31.881 [2024-07-12 16:03:01.457978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.458112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.458138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.458153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.458166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.458195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 00:26:31.881 [2024-07-12 16:03:01.468003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.468127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.468153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.468167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.468181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.468209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 00:26:31.881 [2024-07-12 16:03:01.478023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.478152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.478178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.478192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.478206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.478234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 
00:26:31.881 [2024-07-12 16:03:01.488116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.488259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.488289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.488305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.881 [2024-07-12 16:03:01.488327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.881 [2024-07-12 16:03:01.488358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.881 qpair failed and we were unable to recover it. 00:26:31.881 [2024-07-12 16:03:01.498085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.881 [2024-07-12 16:03:01.498214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.881 [2024-07-12 16:03:01.498239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.881 [2024-07-12 16:03:01.498254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.498267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.498295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 00:26:31.882 [2024-07-12 16:03:01.508123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.508246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.508272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.508286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.508299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.508335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 
00:26:31.882 [2024-07-12 16:03:01.518233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.518363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.518389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.518404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.518417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.518446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 00:26:31.882 [2024-07-12 16:03:01.528193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.528333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.528362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.528378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.528391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.528426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 00:26:31.882 [2024-07-12 16:03:01.538200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.538334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.538360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.538375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.538388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.538416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 
00:26:31.882 [2024-07-12 16:03:01.548225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.548353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.548379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.548394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.548407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.548435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 00:26:31.882 [2024-07-12 16:03:01.558275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.558411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.558437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.558452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.558466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.558494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 00:26:31.882 [2024-07-12 16:03:01.568305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.568460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.568485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.568499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.568513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.568541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 
00:26:31.882 [2024-07-12 16:03:01.578333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.578472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.578501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.578516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.578528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.578556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 00:26:31.882 [2024-07-12 16:03:01.588338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.588472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.588498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.588513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.588526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.588554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 00:26:31.882 [2024-07-12 16:03:01.598398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.882 [2024-07-12 16:03:01.598525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.882 [2024-07-12 16:03:01.598550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.882 [2024-07-12 16:03:01.598565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.882 [2024-07-12 16:03:01.598578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:31.882 [2024-07-12 16:03:01.598606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:31.882 qpair failed and we were unable to recover it. 
00:26:32.148 [2024-07-12 16:03:01.608416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.608550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.608575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.608590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.608603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.608631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.618453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.618584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.618609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.618624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.618638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.618674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.628484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.628654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.628679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.628694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.628708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.628736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 
00:26:32.148 [2024-07-12 16:03:01.638536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.638687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.638713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.638727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.638740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.638768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.648576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.648717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.648742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.648756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.648770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.648797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.658566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.658696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.658721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.658736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.658749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.658777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 
00:26:32.148 [2024-07-12 16:03:01.668613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.668740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.668770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.668785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.668798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.668826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.678624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.678746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.678771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.678787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.678800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.678828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.688660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.688794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.688819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.688834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.688847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.688876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 
00:26:32.148 [2024-07-12 16:03:01.698684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.698813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.698839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.698854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.698867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.698895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.708699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.708841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.708866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.708880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.708899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.708927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.718740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.718871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.718896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.718911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.718924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.718952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 
00:26:32.148 [2024-07-12 16:03:01.728754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.728884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.728909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.728923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.728937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.148 [2024-07-12 16:03:01.728965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.148 qpair failed and we were unable to recover it. 00:26:32.148 [2024-07-12 16:03:01.738795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.148 [2024-07-12 16:03:01.738944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.148 [2024-07-12 16:03:01.738972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.148 [2024-07-12 16:03:01.738990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.148 [2024-07-12 16:03:01.739003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.739033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.748825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.748979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.749005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.749020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.749033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.749061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 
00:26:32.149 [2024-07-12 16:03:01.758885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.759037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.759062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.759077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.759091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.759119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.768866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.768997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.769021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.769036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.769050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.769078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.778928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.779065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.779091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.779105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.779119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.779147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 
00:26:32.149 [2024-07-12 16:03:01.788928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.789056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.789082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.789097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.789110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.789139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.798948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.799117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.799142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.799157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.799175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.799204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.808984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.809117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.809147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.809162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.809175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.809203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 
00:26:32.149 [2024-07-12 16:03:01.819021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.819154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.819180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.819194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.819207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.819235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.829036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.829167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.829192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.829206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.829220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.829248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.839057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.839182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.839207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.839222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.839235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.839263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 
00:26:32.149 [2024-07-12 16:03:01.849095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.849245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.849270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.849285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.849298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.849333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.859113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.859243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.859268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.859284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.859297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.859334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 00:26:32.149 [2024-07-12 16:03:01.869166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.149 [2024-07-12 16:03:01.869291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.149 [2024-07-12 16:03:01.869323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.149 [2024-07-12 16:03:01.869340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.149 [2024-07-12 16:03:01.869354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.149 [2024-07-12 16:03:01.869384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.149 qpair failed and we were unable to recover it. 
00:26:32.409 [2024-07-12 16:03:01.879155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.409 [2024-07-12 16:03:01.879282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.409 [2024-07-12 16:03:01.879325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.409 [2024-07-12 16:03:01.879341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.409 [2024-07-12 16:03:01.879355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.409 [2024-07-12 16:03:01.879384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.409 qpair failed and we were unable to recover it. 00:26:32.409 [2024-07-12 16:03:01.889269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.409 [2024-07-12 16:03:01.889416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.409 [2024-07-12 16:03:01.889441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.409 [2024-07-12 16:03:01.889462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.409 [2024-07-12 16:03:01.889477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.409 [2024-07-12 16:03:01.889505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.409 qpair failed and we were unable to recover it. 00:26:32.409 [2024-07-12 16:03:01.899240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.409 [2024-07-12 16:03:01.899369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.409 [2024-07-12 16:03:01.899395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.409 [2024-07-12 16:03:01.899410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.409 [2024-07-12 16:03:01.899423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.409 [2024-07-12 16:03:01.899451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.409 qpair failed and we were unable to recover it. 
00:26:32.409 [2024-07-12 16:03:01.909258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.909401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.909427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.909442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.909455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.909484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:01.919260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.919399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.919426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.919441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.919454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.919483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:01.929380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.929526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.929552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.929567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.929581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.929610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 
00:26:32.410 [2024-07-12 16:03:01.939324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.939453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.939478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.939493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.939507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.939535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:01.949378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.949509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.949535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.949550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.949563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.949591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:01.959405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.959546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.959571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.959586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.959599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.959628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 
00:26:32.410 [2024-07-12 16:03:01.969447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.969576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.969601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.969616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.969629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.969657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:01.979463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.979630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.979656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.979675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.979688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.979716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:01.989492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.989619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.989644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.989658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.989672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.989700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 
00:26:32.410 [2024-07-12 16:03:01.999517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:01.999664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:01.999690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:01.999705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:01.999718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:01.999746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:02.009588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:02.009769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:02.009794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:02.009808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:02.009822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:02.009850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:02.019606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:02.019732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:02.019758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:02.019773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:02.019786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:02.019814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 
00:26:32.410 [2024-07-12 16:03:02.029593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:02.029743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:02.029768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:02.029783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:02.029796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:02.029824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:02.039607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:02.039735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:02.039761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:02.039775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:02.039789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:02.039817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.410 qpair failed and we were unable to recover it. 00:26:32.410 [2024-07-12 16:03:02.049660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.410 [2024-07-12 16:03:02.049790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.410 [2024-07-12 16:03:02.049815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.410 [2024-07-12 16:03:02.049829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.410 [2024-07-12 16:03:02.049843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.410 [2024-07-12 16:03:02.049871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 
00:26:32.411 [2024-07-12 16:03:02.059762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.059888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.059913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.059927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.059940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.059968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 00:26:32.411 [2024-07-12 16:03:02.069721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.069852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.069877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.069898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.069911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.069940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 00:26:32.411 [2024-07-12 16:03:02.079713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.079839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.079865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.079879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.079892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.079920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 
00:26:32.411 [2024-07-12 16:03:02.089760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.089903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.089928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.089942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.089955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.089984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 00:26:32.411 [2024-07-12 16:03:02.099766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.099892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.099916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.099931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.099944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.099972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 00:26:32.411 [2024-07-12 16:03:02.109803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.109935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.109959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.109975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.109987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.110016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 
00:26:32.411 [2024-07-12 16:03:02.119865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.119994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.120019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.120034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.120047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.120075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 00:26:32.411 [2024-07-12 16:03:02.129905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.411 [2024-07-12 16:03:02.130077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.411 [2024-07-12 16:03:02.130102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.411 [2024-07-12 16:03:02.130116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.411 [2024-07-12 16:03:02.130129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.411 [2024-07-12 16:03:02.130158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.411 qpair failed and we were unable to recover it. 00:26:32.669 [2024-07-12 16:03:02.139890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.669 [2024-07-12 16:03:02.140030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.669 [2024-07-12 16:03:02.140056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.669 [2024-07-12 16:03:02.140071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.669 [2024-07-12 16:03:02.140084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.669 [2024-07-12 16:03:02.140112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.669 qpair failed and we were unable to recover it. 
00:26:32.669 [2024-07-12 16:03:02.149980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.669 [2024-07-12 16:03:02.150154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.669 [2024-07-12 16:03:02.150182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.669 [2024-07-12 16:03:02.150198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.150211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.150240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.159975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.160098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.160128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.160146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.160159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.160187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.169988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.170136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.170162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.170181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.170195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.170225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 
00:26:32.670 [2024-07-12 16:03:02.180006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.180137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.180163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.180178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.180191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.180219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.190070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.190245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.190270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.190285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.190298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.190335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.200080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.200213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.200239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.200254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.200267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.200296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 
00:26:32.670 [2024-07-12 16:03:02.210113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.210264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.210289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.210304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.210323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.210353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.220166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.220339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.220365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.220379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.220392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.220421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.230155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.230282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.230308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.230336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.230350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.230380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 
00:26:32.670 [2024-07-12 16:03:02.240160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.240286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.240311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.240337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.240351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.240380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.250221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.250366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.250397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.250412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.250426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.250454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.260231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.260376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.260401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.260416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.260429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.260458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 
00:26:32.670 [2024-07-12 16:03:02.270237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.270367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.270393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.270408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.270421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.270449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.280332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.280491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.280516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.280531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.280543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.280572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 00:26:32.670 [2024-07-12 16:03:02.290364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.670 [2024-07-12 16:03:02.290498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.670 [2024-07-12 16:03:02.290523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.670 [2024-07-12 16:03:02.290538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.670 [2024-07-12 16:03:02.290551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.670 [2024-07-12 16:03:02.290585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.670 qpair failed and we were unable to recover it. 
00:26:32.670 [2024-07-12 16:03:02.300364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.300543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.300568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.300583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.300597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.300625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 00:26:32.671 [2024-07-12 16:03:02.310416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.310549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.310574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.310589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.310602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.310630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 00:26:32.671 [2024-07-12 16:03:02.320411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.320529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.320553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.320568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.320581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.320609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 
00:26:32.671 [2024-07-12 16:03:02.330458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.330634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.330659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.330674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.330687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.330715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 00:26:32.671 [2024-07-12 16:03:02.340472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.340610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.340640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.340659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.340672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.340700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 00:26:32.671 [2024-07-12 16:03:02.350495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.350621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.350647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.350662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.350675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.350704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 
00:26:32.671 [2024-07-12 16:03:02.360539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.360671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.360696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.360711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.360724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.360752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 00:26:32.671 [2024-07-12 16:03:02.370576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.370713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.370739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.370755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.370768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.370798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 00:26:32.671 [2024-07-12 16:03:02.380659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.380797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.380822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.380837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.380850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.380887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 
00:26:32.671 [2024-07-12 16:03:02.390595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.671 [2024-07-12 16:03:02.390734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.671 [2024-07-12 16:03:02.390760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.671 [2024-07-12 16:03:02.390775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.671 [2024-07-12 16:03:02.390788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.671 [2024-07-12 16:03:02.390816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.671 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.400672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.400794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.400820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.400835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.400849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.400877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.410660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.410789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.410815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.410830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.410843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.410872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 
00:26:32.930 [2024-07-12 16:03:02.420673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.420816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.420841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.420856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.420869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.420897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.430701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.430834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.430864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.430880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.430893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.430921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.440768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.440919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.440946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.440961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.440981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.441014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 
00:26:32.930 [2024-07-12 16:03:02.450796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.450957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.450982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.450998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.451011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.451041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.460797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.460940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.460966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.460981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.460994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.461022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.470843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.470975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.471001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.471017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.471036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.471065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 
00:26:32.930 [2024-07-12 16:03:02.480839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.480995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.481020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.481035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.481048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.481077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.491014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.491151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.491176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.491191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.491204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.491232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.500983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.501139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.501164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.501178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.501191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.501219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 
00:26:32.930 [2024-07-12 16:03:02.510957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.511082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.511107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.511122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.511135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.511163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.520989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.521121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.521145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.521160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.521173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.521202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 00:26:32.930 [2024-07-12 16:03:02.531025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.531163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.531190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.531209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.531224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.930 [2024-07-12 16:03:02.531253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.930 qpair failed and we were unable to recover it. 
00:26:32.930 [2024-07-12 16:03:02.541020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.930 [2024-07-12 16:03:02.541154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.930 [2024-07-12 16:03:02.541180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.930 [2024-07-12 16:03:02.541195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.930 [2024-07-12 16:03:02.541208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.541236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.551074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.551201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.551226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.551241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.551254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.551282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.561069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.561207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.561232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.561247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.561265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.561294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 
00:26:32.931 [2024-07-12 16:03:02.571146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.571324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.571349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.571364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.571377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.571405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.581263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.581399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.581424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.581438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.581450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.581478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.591183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.591306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.591338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.591354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.591367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.591396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 
00:26:32.931 [2024-07-12 16:03:02.601217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.601367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.601393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.601407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.601420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.601449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.611240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.611401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.611426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.611441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.611455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.611483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.621242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.621378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.621404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.621419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.621432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.621461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 
00:26:32.931 [2024-07-12 16:03:02.631301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.631436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.631461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.631476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.631489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.631517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.641290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.641435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.641460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.641475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.641487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.641516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 00:26:32.931 [2024-07-12 16:03:02.651375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.931 [2024-07-12 16:03:02.651505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.931 [2024-07-12 16:03:02.651531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.931 [2024-07-12 16:03:02.651546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.931 [2024-07-12 16:03:02.651564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:32.931 [2024-07-12 16:03:02.651593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:32.931 qpair failed and we were unable to recover it. 
00:26:33.190 [2024-07-12 16:03:02.661401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.661529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.661554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.661568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.661581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.661609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.671469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.671614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.671642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.671657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.671670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.671700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.681448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.681579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.681604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.681619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.681632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.681660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 
00:26:33.190 [2024-07-12 16:03:02.691475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.691610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.691636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.691651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.691664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.691692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.701544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.701715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.701740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.701754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.701767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.701795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.711508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.711639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.711665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.711679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.711692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.711721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 
00:26:33.190 [2024-07-12 16:03:02.721532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.721677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.721703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.721717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.721730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.721758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.731578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.731706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.731731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.731746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.731759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.731786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.741600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.741727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.741752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.741773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.741787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.741815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 
00:26:33.190 [2024-07-12 16:03:02.751653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.751796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.751821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.751836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.751848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.751877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.761667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.761787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.761812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.761827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.761840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.761867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.190 [2024-07-12 16:03:02.771712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.771842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.771867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.771882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.771895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.771923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 
00:26:33.190 [2024-07-12 16:03:02.781768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.190 [2024-07-12 16:03:02.781925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.190 [2024-07-12 16:03:02.781952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.190 [2024-07-12 16:03:02.781967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.190 [2024-07-12 16:03:02.781983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.190 [2024-07-12 16:03:02.782014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.190 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.791778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.791906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.791931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.791947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.791960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.791989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.801851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.802022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.802047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.802062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.802075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.802103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 
00:26:33.191 [2024-07-12 16:03:02.811903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.812070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.812095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.812110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.812124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.812152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.821802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.821929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.821954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.821969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.821983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.822011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.831899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.832056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.832082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.832102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.832118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.832148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 
00:26:33.191 [2024-07-12 16:03:02.841932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.842069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.842093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.842108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.842121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.842149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.851939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.852089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.852115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.852130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.852143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.852170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.861963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.862092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.862117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.862132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.862145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.862173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 
00:26:33.191 [2024-07-12 16:03:02.871971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.872115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.872140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.872155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.872168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.872197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.881972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.882096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.882122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.882136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.882149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.882178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.892027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.892166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.892191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.892206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.892219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.892248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 
00:26:33.191 [2024-07-12 16:03:02.902072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.902200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.902225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.902240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.902253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.902282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.191 [2024-07-12 16:03:02.912088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.191 [2024-07-12 16:03:02.912225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.191 [2024-07-12 16:03:02.912253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.191 [2024-07-12 16:03:02.912270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.191 [2024-07-12 16:03:02.912283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.191 [2024-07-12 16:03:02.912311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.191 qpair failed and we were unable to recover it. 00:26:33.449 [2024-07-12 16:03:02.922237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.449 [2024-07-12 16:03:02.922376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.449 [2024-07-12 16:03:02.922407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.449 [2024-07-12 16:03:02.922423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.449 [2024-07-12 16:03:02.922437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.449 [2024-07-12 16:03:02.922466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.449 qpair failed and we were unable to recover it. 
00:26:33.449 [2024-07-12 16:03:02.932147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.449 [2024-07-12 16:03:02.932280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.449 [2024-07-12 16:03:02.932306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.449 [2024-07-12 16:03:02.932331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.449 [2024-07-12 16:03:02.932345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.449 [2024-07-12 16:03:02.932376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.449 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:02.942152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:02.942282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:02.942307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:02.942333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:02.942347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:02.942376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:02.952177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:02.952299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:02.952331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:02.952347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:02.952360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:02.952389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 
00:26:33.450 [2024-07-12 16:03:02.962249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:02.962381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:02.962407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:02.962422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:02.962435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:02.962463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:02.972253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:02.972390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:02.972416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:02.972431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:02.972443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:02.972471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:02.982274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:02.982410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:02.982435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:02.982450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:02.982463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:02.982490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 
00:26:33.450 [2024-07-12 16:03:02.992305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:02.992453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:02.992479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:02.992494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:02.992506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:02.992534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:03.002372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.002550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.002575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.002590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.002603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.002631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:03.012396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.012565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.012595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.012611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.012625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.012653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 
00:26:33.450 [2024-07-12 16:03:03.022392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.022526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.022551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.022566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.022579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.022607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:03.032436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.032563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.032589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.032604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.032617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.032645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:03.042460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.042588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.042613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.042628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.042642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.042670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 
00:26:33.450 [2024-07-12 16:03:03.052551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.052680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.052705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.052721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.052734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.052768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:03.062503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.062628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.062653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.062668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.062681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.062711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.450 [2024-07-12 16:03:03.072569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.072751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.072776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.072790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.072804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.072832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 
00:26:33.450 [2024-07-12 16:03:03.082553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.450 [2024-07-12 16:03:03.082678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.450 [2024-07-12 16:03:03.082703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.450 [2024-07-12 16:03:03.082717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.450 [2024-07-12 16:03:03.082730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.450 [2024-07-12 16:03:03.082758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.450 qpair failed and we were unable to recover it. 00:26:33.451 [2024-07-12 16:03:03.092624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.092775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.092802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.092817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.092830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.092859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 00:26:33.451 [2024-07-12 16:03:03.102616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.102751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.102781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.102797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.102810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.102839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 
00:26:33.451 [2024-07-12 16:03:03.112660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.112797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.112823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.112838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.112851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.112879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 00:26:33.451 [2024-07-12 16:03:03.122668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.122806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.122832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.122846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.122860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.122888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 00:26:33.451 [2024-07-12 16:03:03.132707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.132839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.132865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.132880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.132892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.132921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 
00:26:33.451 [2024-07-12 16:03:03.142723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.142855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.142881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.142896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.142909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.142946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 00:26:33.451 [2024-07-12 16:03:03.152786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.152929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.152954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.152969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.152983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.153010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 00:26:33.451 [2024-07-12 16:03:03.162772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.162896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.162921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.162936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.162949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.162977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 
00:26:33.451 [2024-07-12 16:03:03.172814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.451 [2024-07-12 16:03:03.172947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.451 [2024-07-12 16:03:03.172972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.451 [2024-07-12 16:03:03.172987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.451 [2024-07-12 16:03:03.173001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.451 [2024-07-12 16:03:03.173029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.451 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.182837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.182988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.183014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.183029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.183042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.183070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.192892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.193018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.193048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.193064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.193077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.193105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 
00:26:33.709 [2024-07-12 16:03:03.202914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.203050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.203075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.203090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.203103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.203132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.212984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.213114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.213140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.213155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.213168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.213196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.222942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.223067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.223111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.223126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.223139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.223168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 
00:26:33.709 [2024-07-12 16:03:03.233026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.233151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.233176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.233191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.233210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.233238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.243022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.243151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.243177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.243192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.243205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.243236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.253081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.253210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.253236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.253250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.253264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.253294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 
00:26:33.709 [2024-07-12 16:03:03.263092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.263226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.263251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.263265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.263279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.263306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.273092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.273220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.709 [2024-07-12 16:03:03.273245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.709 [2024-07-12 16:03:03.273259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.709 [2024-07-12 16:03:03.273273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.709 [2024-07-12 16:03:03.273301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.709 qpair failed and we were unable to recover it. 00:26:33.709 [2024-07-12 16:03:03.283132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.709 [2024-07-12 16:03:03.283324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.283349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.283363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.283378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.283406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 
00:26:33.710 [2024-07-12 16:03:03.293167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.293302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.293335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.293350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.293363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.293392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.303206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.303347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.303372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.303387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.303401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.303429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.313195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.313353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.313378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.313393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.313406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.313434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 
00:26:33.710 [2024-07-12 16:03:03.323268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.323417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.323446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.323462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.323481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.323510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.333289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.333428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.333453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.333468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.333481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.333509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.343340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.343473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.343498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.343513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.343526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.343555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 
00:26:33.710 [2024-07-12 16:03:03.353335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.353467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.353493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.353508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.353521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.353550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.363397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.363568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.363594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.363609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.363622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.363650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.373381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.373523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.373548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.373563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.373576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.373604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 
00:26:33.710 [2024-07-12 16:03:03.383418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.383553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.383578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.383592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.383605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.383633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.393430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.393557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.393582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.393597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.393611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.393638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.403483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.403653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.403678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.403693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.403706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.403734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 
00:26:33.710 [2024-07-12 16:03:03.413534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.413686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.413713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.413728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.413747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.710 [2024-07-12 16:03:03.413776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.710 qpair failed and we were unable to recover it. 00:26:33.710 [2024-07-12 16:03:03.423506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.710 [2024-07-12 16:03:03.423635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.710 [2024-07-12 16:03:03.423662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.710 [2024-07-12 16:03:03.423677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.710 [2024-07-12 16:03:03.423690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.711 [2024-07-12 16:03:03.423718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.711 qpair failed and we were unable to recover it. 00:26:33.711 [2024-07-12 16:03:03.433578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.711 [2024-07-12 16:03:03.433709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.711 [2024-07-12 16:03:03.433734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.711 [2024-07-12 16:03:03.433749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.711 [2024-07-12 16:03:03.433762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.711 [2024-07-12 16:03:03.433790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.711 qpair failed and we were unable to recover it. 
00:26:33.970 [2024-07-12 16:03:03.443573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.443698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.443723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.443737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.443751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.443779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.453636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.453772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.453801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.453818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.453832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.453861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.463635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.463774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.463800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.463815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.463828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.463856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 
00:26:33.970 [2024-07-12 16:03:03.473722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.473870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.473895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.473909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.473923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.473951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.483718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.483889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.483914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.483928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.483941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.483969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.493738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.493910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.493936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.493950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.493963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.493991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 
00:26:33.970 [2024-07-12 16:03:03.503768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.503905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.503932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.503956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.503970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.504000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.513780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.513909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.513935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.513950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.513963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.513992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.523874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.524038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.524064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.524078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.524092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.524120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 
00:26:33.970 [2024-07-12 16:03:03.533852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.533997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.534022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.534037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.534050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.534078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.543914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.544077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.544103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.544117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.544130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.544159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.553937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.554067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.554093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.554108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.554121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.554149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 
00:26:33.970 [2024-07-12 16:03:03.563918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.564042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.564068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.564083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.564096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.564124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.573966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.574099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.574125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.574140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.574153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.574181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 00:26:33.970 [2024-07-12 16:03:03.583996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.584172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.584195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.970 [2024-07-12 16:03:03.584209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.970 [2024-07-12 16:03:03.584222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.970 [2024-07-12 16:03:03.584250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.970 qpair failed and we were unable to recover it. 
00:26:33.970 [2024-07-12 16:03:03.594027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.970 [2024-07-12 16:03:03.594177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.970 [2024-07-12 16:03:03.594202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.594223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.594237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.594267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 00:26:33.971 [2024-07-12 16:03:03.604020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.604141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.604166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.604181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.604194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.604222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 00:26:33.971 [2024-07-12 16:03:03.614151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.614332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.614358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.614373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.614386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.614415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 
00:26:33.971 [2024-07-12 16:03:03.624113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.624243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.624268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.624283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.624296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.624331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 00:26:33.971 [2024-07-12 16:03:03.634148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.634277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.634303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.634328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.634343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.634372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 00:26:33.971 [2024-07-12 16:03:03.644153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.644289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.644323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.644343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.644357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.644387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 
00:26:33.971 [2024-07-12 16:03:03.654184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.654313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.654345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.654359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.654372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.654400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 00:26:33.971 [2024-07-12 16:03:03.664198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.664335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.664361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.664375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.664388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.664417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 00:26:33.971 [2024-07-12 16:03:03.674232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.674368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.674394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.674410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.674424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.674454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 
00:26:33.971 [2024-07-12 16:03:03.684247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.684380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.684405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.684426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.684439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1c5b3f0 00:26:33.971 [2024-07-12 16:03:03.684468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:33.971 qpair failed and we were unable to recover it. 00:26:33.971 [2024-07-12 16:03:03.694335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.971 [2024-07-12 16:03:03.694490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.971 [2024-07-12 16:03:03.694522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.971 [2024-07-12 16:03:03.694541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.971 [2024-07-12 16:03:03.694555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe0c000b90 00:26:33.971 [2024-07-12 16:03:03.694601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:33.971 qpair failed and we were unable to recover it. 00:26:34.229 [2024-07-12 16:03:03.704372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.229 [2024-07-12 16:03:03.704511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.229 [2024-07-12 16:03:03.704540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.229 [2024-07-12 16:03:03.704555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.229 [2024-07-12 16:03:03.704569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe0c000b90 00:26:34.229 [2024-07-12 16:03:03.704601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:34.229 qpair failed and we were unable to recover it. 
00:26:34.229 [2024-07-12 16:03:03.714396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.229 [2024-07-12 16:03:03.714549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.229 [2024-07-12 16:03:03.714581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.229 [2024-07-12 16:03:03.714599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.229 [2024-07-12 16:03:03.714613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe04000b90 00:26:34.229 [2024-07-12 16:03:03.714647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:34.229 qpair failed and we were unable to recover it. 00:26:34.229 [2024-07-12 16:03:03.724481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.229 [2024-07-12 16:03:03.724615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.229 [2024-07-12 16:03:03.724643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.229 [2024-07-12 16:03:03.724658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.229 [2024-07-12 16:03:03.724672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe04000b90 00:26:34.229 [2024-07-12 16:03:03.724703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:34.229 qpair failed and we were unable to recover it. 00:26:34.229 [2024-07-12 16:03:03.734445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.229 [2024-07-12 16:03:03.734603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.229 [2024-07-12 16:03:03.734636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.229 [2024-07-12 16:03:03.734653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.229 [2024-07-12 16:03:03.734667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe14000b90 00:26:34.229 [2024-07-12 16:03:03.734698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.229 qpair failed and we were unable to recover it. 
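Worth noting before the reset below: the final attempts report different tqpair pointers (0x1c5b3f0 earlier, then 0x7efe0c000b90, 0x7efe04000b90 and 0x7efe14000b90) and qpair ids 2, 3, 4 and 1, so the host is failing across all of its I/O queues rather than a single one, and it is the admin-queue keep-alive failure that finally forces the controller reset. The matching target-side view is available over the JSON-RPC socket; a minimal sketch, assuming the build exposes the standard nvmf RPCs and the target listens on the default /var/tmp/spdk.sock:

  # Run from the SPDK checkout used by this job; the socket path is an assumption.
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_get_subsystems                                        # subsystems and listeners
  $RPC nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1  # empty while the ctrlr is removed
  $RPC nvmf_subsystem_get_qpairs      nqn.2016-06.io.spdk:cnode1  # per-qpair state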
00:26:34.229 [2024-07-12 16:03:03.744456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.229 [2024-07-12 16:03:03.744587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.229 [2024-07-12 16:03:03.744614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.229 [2024-07-12 16:03:03.744630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.229 [2024-07-12 16:03:03.744644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7efe14000b90 00:26:34.229 [2024-07-12 16:03:03.744675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.229 qpair failed and we were unable to recover it. 00:26:34.229 [2024-07-12 16:03:03.744768] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:34.229 A controller has encountered a failure and is being reset. 00:26:34.229 Controller properly reset. 00:26:34.229 Initializing NVMe Controllers 00:26:34.229 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:34.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:34.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:34.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:34.229 Initialization complete. Launching workers. 
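After the keep-alive failure the host resets the controller, re-attaches to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and re-associates one I/O queue per lcore (0-3), at which point the tc2 case completes. For comparison, the kernel initiator rides out the same outage with the reconnect knobs on nvme connect; a hedged example (flag names as in recent nvme-cli, worth verifying on the rig):

  # Reconnect-tolerant connect from a kernel host, tolerating ~60s of controller loss.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --reconnect-delay=2 --ctrl-loss-tmo=60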
00:26:34.229 Starting thread on core 1 00:26:34.229 Starting thread on core 2 00:26:34.229 Starting thread on core 3 00:26:34.229 Starting thread on core 0 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:34.229 00:26:34.229 real 0m10.778s 00:26:34.229 user 0m18.384s 00:26:34.229 sys 0m5.233s 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:34.229 ************************************ 00:26:34.229 END TEST nvmf_target_disconnect_tc2 00:26:34.229 ************************************ 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.229 rmmod nvme_tcp 00:26:34.229 rmmod nvme_fabrics 00:26:34.229 rmmod nvme_keyring 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 134192 ']' 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 134192 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 134192 ']' 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 134192 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134192 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134192' 00:26:34.229 killing process with pid 134192 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 134192 00:26:34.229 16:03:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 134192 00:26:34.487 16:03:04 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:34.487 16:03:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:34.487 16:03:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:34.487 16:03:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.487 16:03:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.487 16:03:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.487 16:03:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.487 16:03:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.015 16:03:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:37.015 00:26:37.015 real 0m15.567s 00:26:37.015 user 0m44.253s 00:26:37.015 sys 0m7.207s 00:26:37.015 16:03:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.015 16:03:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:37.015 ************************************ 00:26:37.015 END TEST nvmf_target_disconnect 00:26:37.015 ************************************ 00:26:37.015 16:03:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:37.015 16:03:06 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:37.015 16:03:06 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:37.015 16:03:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.015 16:03:06 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:37.015 00:26:37.015 real 19m12.344s 00:26:37.015 user 44m55.558s 00:26:37.015 sys 5m1.363s 00:26:37.015 16:03:06 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:37.015 16:03:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.015 ************************************ 00:26:37.015 END TEST nvmf_tcp 00:26:37.015 ************************************ 00:26:37.015 16:03:06 -- common/autotest_common.sh@1142 -- # return 0 00:26:37.015 16:03:06 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:26:37.015 16:03:06 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:37.015 16:03:06 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:37.015 16:03:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:37.015 16:03:06 -- common/autotest_common.sh@10 -- # set +x 00:26:37.015 ************************************ 00:26:37.015 START TEST spdkcli_nvmf_tcp 00:26:37.015 ************************************ 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:37.015 * Looking for test storage... 
00:26:37.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:37.015 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=135497 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 135497 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 135497 ']' 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.016 [2024-07-12 16:03:06.415469] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:26:37.016 [2024-07-12 16:03:06.415560] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135497 ] 00:26:37.016 EAL: No free 2048 kB hugepages reported on node 1 00:26:37.016 [2024-07-12 16:03:06.471347] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:37.016 [2024-07-12 16:03:06.578173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.016 [2024-07-12 16:03:06.578188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:37.016 16:03:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:37.016 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:37.016 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:37.016 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:37.016 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:37.016 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:37.016 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:37.016 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:37.016 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:37.016 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:37.016 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:37.016 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:37.016 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:37.016 ' 00:26:39.541 [2024-07-12 16:03:09.241991] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.911 [2024-07-12 16:03:10.522390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:43.433 [2024-07-12 16:03:12.829525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:45.328 [2024-07-12 16:03:14.775582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:46.701 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:46.701 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:46.701 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:46.701 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:46.701 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:46.701 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:46.701 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:46.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:46.701 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:46.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:46.701 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:46.701 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:46.701 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:46.701 16:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.270 16:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:47.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:47.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:47.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:47.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:47.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:47.270 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:47.270 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:47.270 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:47.270 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:47.270 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:47.270 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:47.271 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:47.271 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:47.271 ' 00:26:52.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:52.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:52.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:52.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:52.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:52.532 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:52.532 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:52.532 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:52.532 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:52.532 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:52.532 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:26:52.532 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:52.532 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:52.532 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 135497 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 135497 ']' 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 135497 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135497 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135497' 00:26:52.532 killing process with pid 135497 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 135497 00:26:52.532 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 135497 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 135497 ']' 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 135497 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 135497 ']' 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 135497 00:26:52.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (135497) - No such process 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 135497 is not found' 00:26:52.790 Process with pid 135497 is not found 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:52.790 00:26:52.790 real 0m16.121s 00:26:52.790 user 0m34.123s 00:26:52.790 sys 0m0.814s 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:52.790 16:03:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:52.790 ************************************ 00:26:52.790 END TEST spdkcli_nvmf_tcp 00:26:52.790 ************************************ 00:26:52.790 16:03:22 -- common/autotest_common.sh@1142 -- # return 0 00:26:52.790 16:03:22 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:52.790 16:03:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:52.790 16:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.790 16:03:22 -- common/autotest_common.sh@10 -- # set +x 00:26:52.790 ************************************ 00:26:52.790 START TEST nvmf_identify_passthru 00:26:52.790 ************************************ 00:26:52.790 16:03:22 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:53.051 * Looking for test storage... 00:26:53.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.051 16:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.051 16:03:22 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.051 16:03:22 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.051 16:03:22 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.051 16:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.051 16:03:22 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.051 16:03:22 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.051 16:03:22 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:53.051 16:03:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.051 16:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.051 16:03:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:53.051 16:03:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.051 16:03:22 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.051 16:03:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.579 16:03:24 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:55.579 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.579 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:55.580 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:55.580 Found net devices under 0000:09:00.0: cvl_0_0 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:55.580 Found net devices under 0000:09:00.1: cvl_0_1 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
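The nvmf_tcp_init sequence traced next amounts to moving one port of the detected e810 pair into a private network namespace for the target while the other port stays on the host as the initiator; a condensed sketch of those commands (interface names and addresses exactly as they appear in the trace):

  # target port cvl_0_0 goes into its own namespace, initiator stays on cvl_0_1
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Splitting the two ports across namespaces forces the 10.0.0.1/10.0.0.2 traffic onto the physical link rather than the local loopback path, so the TCP transport is exercised end to end.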
00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:26:55.580 00:26:55.580 --- 10.0.0.2 ping statistics --- 00:26:55.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.580 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:55.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:26:55.580 00:26:55.580 --- 10.0.0.1 ping statistics --- 00:26:55.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.580 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.580 16:03:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.580 16:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:55.580 16:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:0b:00.0 00:26:55.580 16:03:24 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:0b:00.0 00:26:55.580 16:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:26:55.580 16:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:26:55.580 16:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:26:55.580 16:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:55.580 16:03:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:55.580 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.827 
16:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:26:59.827 16:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:26:59.827 16:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:59.827 16:03:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:59.827 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.006 16:03:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:04.006 16:03:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:04.006 16:03:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:04.006 16:03:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=140630 00:27:04.006 16:03:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:04.006 16:03:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:04.006 16:03:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 140630 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 140630 ']' 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.006 16:03:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:04.006 [2024-07-12 16:03:33.316866] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:27:04.006 [2024-07-12 16:03:33.316959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.006 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.006 [2024-07-12 16:03:33.383014] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:04.006 [2024-07-12 16:03:33.490369] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.006 [2024-07-12 16:03:33.490435] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:04.006 [2024-07-12 16:03:33.490463] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.006 [2024-07-12 16:03:33.490475] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.006 [2024-07-12 16:03:33.490485] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:04.006 [2024-07-12 16:03:33.490536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.006 [2024-07-12 16:03:33.490560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:04.006 [2024-07-12 16:03:33.490618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:04.006 [2024-07-12 16:03:33.490621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.570 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.570 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:04.570 16:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:04.570 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.570 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:04.570 INFO: Log level set to 20 00:27:04.570 INFO: Requests: 00:27:04.570 { 00:27:04.570 "jsonrpc": "2.0", 00:27:04.570 "method": "nvmf_set_config", 00:27:04.570 "id": 1, 00:27:04.570 "params": { 00:27:04.570 "admin_cmd_passthru": { 00:27:04.570 "identify_ctrlr": true 00:27:04.570 } 00:27:04.570 } 00:27:04.570 } 00:27:04.570 00:27:04.827 INFO: response: 00:27:04.827 { 00:27:04.827 "jsonrpc": "2.0", 00:27:04.827 "id": 1, 00:27:04.827 "result": true 00:27:04.827 } 00:27:04.827 00:27:04.827 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.827 16:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:04.827 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:04.828 INFO: Setting log level to 20 00:27:04.828 INFO: Setting log level to 20 00:27:04.828 INFO: Log level set to 20 00:27:04.828 INFO: Log level set to 20 00:27:04.828 INFO: Requests: 00:27:04.828 { 00:27:04.828 "jsonrpc": "2.0", 00:27:04.828 "method": "framework_start_init", 00:27:04.828 "id": 1 00:27:04.828 } 00:27:04.828 00:27:04.828 INFO: Requests: 00:27:04.828 { 00:27:04.828 "jsonrpc": "2.0", 00:27:04.828 "method": "framework_start_init", 00:27:04.828 "id": 1 00:27:04.828 } 00:27:04.828 00:27:04.828 [2024-07-12 16:03:34.404743] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:04.828 INFO: response: 00:27:04.828 { 00:27:04.828 "jsonrpc": "2.0", 00:27:04.828 "id": 1, 00:27:04.828 "result": true 00:27:04.828 } 00:27:04.828 00:27:04.828 INFO: response: 00:27:04.828 { 00:27:04.828 "jsonrpc": "2.0", 00:27:04.828 "id": 1, 00:27:04.828 "result": true 00:27:04.828 } 00:27:04.828 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.828 16:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.828 16:03:34 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:04.828 INFO: Setting log level to 40 00:27:04.828 INFO: Setting log level to 40 00:27:04.828 INFO: Setting log level to 40 00:27:04.828 [2024-07-12 16:03:34.414935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.828 16:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:04.828 16:03:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.828 16:03:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.103 Nvme0n1 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.103 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.103 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.103 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.103 [2024-07-12 16:03:37.302425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.103 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.103 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.103 [ 00:27:08.103 { 00:27:08.103 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:08.103 "subtype": "Discovery", 00:27:08.103 "listen_addresses": [], 00:27:08.104 "allow_any_host": true, 00:27:08.104 "hosts": [] 00:27:08.104 }, 00:27:08.104 { 00:27:08.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.104 "subtype": "NVMe", 00:27:08.104 "listen_addresses": [ 00:27:08.104 { 00:27:08.104 "trtype": "TCP", 00:27:08.104 "adrfam": "IPv4", 00:27:08.104 "traddr": "10.0.0.2", 00:27:08.104 "trsvcid": "4420" 00:27:08.104 } 00:27:08.104 ], 00:27:08.104 "allow_any_host": true, 00:27:08.104 "hosts": [], 00:27:08.104 "serial_number": 
"SPDK00000000000001", 00:27:08.104 "model_number": "SPDK bdev Controller", 00:27:08.104 "max_namespaces": 1, 00:27:08.104 "min_cntlid": 1, 00:27:08.104 "max_cntlid": 65519, 00:27:08.104 "namespaces": [ 00:27:08.104 { 00:27:08.104 "nsid": 1, 00:27:08.104 "bdev_name": "Nvme0n1", 00:27:08.104 "name": "Nvme0n1", 00:27:08.104 "nguid": "5F86082D1086484DB0A37C836533DD21", 00:27:08.104 "uuid": "5f86082d-1086-484d-b0a3-7c836533dd21" 00:27:08.104 } 00:27:08.104 ] 00:27:08.104 } 00:27:08.104 ] 00:27:08.104 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.104 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:08.104 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:08.104 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:08.104 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.104 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:27:08.104 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:08.104 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:08.104 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:08.104 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.360 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:08.360 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:27:08.360 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:08.360 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.360 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:08.360 16:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.360 rmmod nvme_tcp 00:27:08.360 rmmod nvme_fabrics 00:27:08.360 rmmod nvme_keyring 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:08.360 16:03:37 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 140630 ']' 00:27:08.360 16:03:37 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 140630 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 140630 ']' 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 140630 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140630 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140630' 00:27:08.360 killing process with pid 140630 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 140630 00:27:08.360 16:03:37 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 140630 00:27:10.256 16:03:39 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.256 16:03:39 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.256 16:03:39 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.256 16:03:39 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.256 16:03:39 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.256 16:03:39 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.256 16:03:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:10.256 16:03:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.159 16:03:41 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.159 00:27:12.159 real 0m19.054s 00:27:12.159 user 0m30.450s 00:27:12.159 sys 0m2.548s 00:27:12.159 16:03:41 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.159 16:03:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:12.159 ************************************ 00:27:12.159 END TEST nvmf_identify_passthru 00:27:12.159 ************************************ 00:27:12.159 16:03:41 -- common/autotest_common.sh@1142 -- # return 0 00:27:12.159 16:03:41 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:12.159 16:03:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:12.159 16:03:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.159 16:03:41 -- common/autotest_common.sh@10 -- # set +x 00:27:12.159 ************************************ 00:27:12.159 START TEST nvmf_dif 00:27:12.159 ************************************ 00:27:12.159 16:03:41 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:12.159 * Looking for test storage... 
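For reference, the identify-passthru sequence exercised by the test that just finished can be issued by hand against a running nvmf_tgt. This is a minimal sketch, assuming rpc_cmd in identify_passthru.sh maps to scripts/rpc.py and that the local NVMe controller sits at PCIe address 0000:0b:00.0 as in this run; all command names and arguments are the ones visible in the trace above:

  # enable Identify passthrough before the framework finishes init
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  ./scripts/rpc.py framework_start_init
  # TCP transport with the same options the test passes (-o -u 8192)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # attach the local PCIe controller and export it through a one-namespace subsystem
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # the test passes when the serial/model numbers reported over the fabric
  # match those of the underlying PCIe controller
  ./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'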
00:27:12.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:12.159 16:03:41 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.159 16:03:41 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:12.159 16:03:41 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.159 16:03:41 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.159 16:03:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.159 16:03:41 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.159 16:03:41 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.159 16:03:41 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:12.159 16:03:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:12.159 16:03:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:12.159 16:03:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:12.159 16:03:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:12.159 16:03:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:12.159 16:03:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.159 16:03:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:12.159 16:03:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.159 16:03:41 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.159 16:03:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:14.684 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:14.684 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:14.684 Found net devices under 0000:09:00.0: cvl_0_0 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:14.684 Found net devices under 0000:09:00.1: cvl_0_1 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:14.684 16:03:43 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:14.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:27:14.684 00:27:14.684 --- 10.0.0.2 ping statistics --- 00:27:14.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.684 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:14.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:14.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:27:14.684 00:27:14.684 --- 10.0.0.1 ping statistics --- 00:27:14.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.684 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:14.684 16:03:43 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.617 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:15.617 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:15.617 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:15.617 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:15.617 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:15.617 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:15.617 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:15.617 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:15.617 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:15.617 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:15.617 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:15.617 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:15.617 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:15.617 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:15.617 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:15.617 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:15.617 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:15.617 16:03:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:15.617 16:03:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=143905 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:15.617 16:03:45 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 143905 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 143905 ']' 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.617 16:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.617 [2024-07-12 16:03:45.278377] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:27:15.617 [2024-07-12 16:03:45.278464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.617 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.617 [2024-07-12 16:03:45.339181] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.875 [2024-07-12 16:03:45.441240] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:15.875 [2024-07-12 16:03:45.441292] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.875 [2024-07-12 16:03:45.441328] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.875 [2024-07-12 16:03:45.441340] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.875 [2024-07-12 16:03:45.441350] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
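The fio_dif tests run against a target isolated in a network namespace. The plumbing performed by nvmf_tcp_init above reduces to the following sketch; the commands are taken from the trace, and cvl_0_0/cvl_0_1 are the two e810 ports enumerated earlier (target and initiator side respectively):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # nvmf_tgt is then started inside the target namespace, as logged above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF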
00:27:15.875 [2024-07-12 16:03:45.441392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:15.875 16:03:45 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.875 16:03:45 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.875 16:03:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:15.875 16:03:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.875 [2024-07-12 16:03:45.578087] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.875 16:03:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.875 16:03:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:16.133 ************************************ 00:27:16.133 START TEST fio_dif_1_default 00:27:16.133 ************************************ 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.133 bdev_null0 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.133 [2024-07-12 16:03:45.634369] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.133 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.133 { 00:27:16.133 "params": { 00:27:16.133 "name": "Nvme$subsystem", 00:27:16.134 "trtype": "$TEST_TRANSPORT", 00:27:16.134 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.134 "adrfam": "ipv4", 00:27:16.134 "trsvcid": "$NVMF_PORT", 00:27:16.134 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.134 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.134 "hdgst": ${hdgst:-false}, 00:27:16.134 "ddgst": ${ddgst:-false} 00:27:16.134 }, 00:27:16.134 "method": "bdev_nvme_attach_controller" 00:27:16.134 } 00:27:16.134 EOF 00:27:16.134 )") 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:16.134 "params": { 00:27:16.134 "name": "Nvme0", 00:27:16.134 "trtype": "tcp", 00:27:16.134 "traddr": "10.0.0.2", 00:27:16.134 "adrfam": "ipv4", 00:27:16.134 "trsvcid": "4420", 00:27:16.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:16.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:16.134 "hdgst": false, 00:27:16.134 "ddgst": false 00:27:16.134 }, 00:27:16.134 "method": "bdev_nvme_attach_controller" 00:27:16.134 }' 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:16.134 16:03:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.392 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:16.392 fio-3.35 00:27:16.392 Starting 1 thread 00:27:16.392 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.624 00:27:28.624 filename0: (groupid=0, jobs=1): err= 0: pid=144135: Fri Jul 12 16:03:56 2024 00:27:28.624 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10001msec) 00:27:28.624 slat (usec): min=6, max=106, avg= 8.49, stdev= 3.38 00:27:28.624 clat (usec): min=751, max=45957, avg=21028.23, stdev=20157.33 00:27:28.624 lat (usec): min=758, max=45993, avg=21036.72, stdev=20157.21 00:27:28.624 clat percentiles (usec): 00:27:28.624 | 1.00th=[ 775], 5.00th=[ 791], 10.00th=[ 807], 20.00th=[ 824], 00:27:28.624 | 30.00th=[ 840], 40.00th=[ 873], 50.00th=[40633], 60.00th=[41157], 00:27:28.624 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:28.624 | 99.00th=[41157], 99.50th=[41157], 99.90th=[45876], 99.95th=[45876], 00:27:28.624 | 99.99th=[45876] 00:27:28.624 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.26, stdev=20.18, samples=19 00:27:28.624 iops : min= 176, max= 192, avg=190.32, 
stdev= 5.04, samples=19 00:27:28.624 lat (usec) : 1000=49.89% 00:27:28.624 lat (msec) : 50=50.11% 00:27:28.624 cpu : usr=87.59%, sys=12.12%, ctx=15, majf=0, minf=294 00:27:28.624 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:28.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.624 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.624 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:28.624 00:27:28.624 Run status group 0 (all jobs): 00:27:28.624 READ: bw=760KiB/s (778kB/s), 760KiB/s-760KiB/s (778kB/s-778kB/s), io=7600KiB (7782kB), run=10001-10001msec 00:27:28.624 16:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:28.624 16:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 00:27:28.625 real 0m11.017s 00:27:28.625 user 0m9.831s 00:27:28.625 sys 0m1.477s 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 ************************************ 00:27:28.625 END TEST fio_dif_1_default 00:27:28.625 ************************************ 00:27:28.625 16:03:56 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:28.625 16:03:56 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:28.625 16:03:56 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:28.625 16:03:56 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 ************************************ 00:27:28.625 START TEST fio_dif_1_multi_subsystems 00:27:28.625 ************************************ 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 
-- # for sub in "$@" 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 bdev_null0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 [2024-07-12 16:03:56.701391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 bdev_null1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 16:03:56 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.625 { 00:27:28.625 "params": { 00:27:28.625 "name": "Nvme$subsystem", 00:27:28.625 "trtype": "$TEST_TRANSPORT", 00:27:28.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.625 "adrfam": "ipv4", 00:27:28.625 "trsvcid": "$NVMF_PORT", 00:27:28.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.625 "hdgst": ${hdgst:-false}, 00:27:28.625 "ddgst": ${ddgst:-false} 00:27:28.625 }, 00:27:28.625 "method": "bdev_nvme_attach_controller" 00:27:28.625 } 00:27:28.625 EOF 00:27:28.625 )") 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@54 -- # local file 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:28.625 { 00:27:28.625 "params": { 00:27:28.625 "name": "Nvme$subsystem", 00:27:28.625 "trtype": "$TEST_TRANSPORT", 00:27:28.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:28.625 "adrfam": "ipv4", 00:27:28.625 "trsvcid": "$NVMF_PORT", 00:27:28.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:28.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:28.625 "hdgst": ${hdgst:-false}, 00:27:28.625 "ddgst": ${ddgst:-false} 00:27:28.625 }, 00:27:28.625 "method": "bdev_nvme_attach_controller" 00:27:28.625 } 00:27:28.625 EOF 00:27:28.625 )") 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
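Each fio_dif subtest drives fio through the SPDK bdev plugin rather than the kernel initiator: the generated SPDK JSON configuration is passed on /dev/fd/62 and the generated job file on /dev/fd/61, with the plugin path preloaded. Below is a sketch of an equivalent stand-alone invocation; the job-file contents are illustrative, inferred from the rw=randread/bs=4096/iodepth=4 banner fio prints further down, and /tmp/bdev.json stands in for the JSON that gen_nvmf_target_json produces (its bdev_nvme_attach_controller parameters are the ones printf'd just below). Only the plugin path and the --spdk_json_conf mechanism are taken verbatim from the trace:

  # illustrative job file; the real one is built by gen_fio_conf and fed as /dev/fd/61
  cat > /tmp/filename0.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  [filename0]
  filename=Nvme0n1
  rw=randread
  bs=4096
  iodepth=4
  EOF
  # /tmp/bdev.json is hypothetical; in the test the equivalent JSON arrives on /dev/fd/62
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/filename0.fio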
00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:28.625 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:28.625 "params": { 00:27:28.625 "name": "Nvme0", 00:27:28.625 "trtype": "tcp", 00:27:28.625 "traddr": "10.0.0.2", 00:27:28.625 "adrfam": "ipv4", 00:27:28.625 "trsvcid": "4420", 00:27:28.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:28.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:28.625 "hdgst": false, 00:27:28.625 "ddgst": false 00:27:28.625 }, 00:27:28.625 "method": "bdev_nvme_attach_controller" 00:27:28.625 },{ 00:27:28.626 "params": { 00:27:28.626 "name": "Nvme1", 00:27:28.626 "trtype": "tcp", 00:27:28.626 "traddr": "10.0.0.2", 00:27:28.626 "adrfam": "ipv4", 00:27:28.626 "trsvcid": "4420", 00:27:28.626 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:28.626 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:28.626 "hdgst": false, 00:27:28.626 "ddgst": false 00:27:28.626 }, 00:27:28.626 "method": "bdev_nvme_attach_controller" 00:27:28.626 }' 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:28.626 16:03:56 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:28.626 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:28.626 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:28.626 fio-3.35 00:27:28.626 Starting 2 threads 00:27:28.626 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.590 00:27:38.590 filename0: (groupid=0, jobs=1): err= 0: pid=145540: Fri Jul 12 16:04:07 2024 00:27:38.590 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10005msec) 00:27:38.590 slat (nsec): min=7047, max=48724, avg=10097.64, stdev=4447.95 00:27:38.590 clat (usec): min=40857, max=44547, avg=41140.34, stdev=418.81 00:27:38.590 lat (usec): min=40865, max=44596, avg=41150.44, stdev=419.42 00:27:38.590 clat percentiles (usec): 00:27:38.590 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:38.590 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:38.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:27:38.591 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:27:38.591 | 99.99th=[44303] 
00:27:38.591 bw ( KiB/s): min= 384, max= 416, per=33.82%, avg=387.20, stdev= 9.85, samples=20 00:27:38.591 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:27:38.591 lat (msec) : 50=100.00% 00:27:38.591 cpu : usr=94.00%, sys=5.70%, ctx=18, majf=0, minf=86 00:27:38.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.591 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:38.591 filename1: (groupid=0, jobs=1): err= 0: pid=145541: Fri Jul 12 16:04:07 2024 00:27:38.591 read: IOPS=188, BW=756KiB/s (774kB/s)(7568KiB/10011msec) 00:27:38.591 slat (nsec): min=7005, max=85618, avg=9555.69, stdev=4008.36 00:27:38.591 clat (usec): min=780, max=44539, avg=21134.60, stdev=20142.07 00:27:38.591 lat (usec): min=788, max=44560, avg=21144.15, stdev=20141.51 00:27:38.591 clat percentiles (usec): 00:27:38.591 | 1.00th=[ 807], 5.00th=[ 824], 10.00th=[ 832], 20.00th=[ 848], 00:27:38.591 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[41157], 60.00th=[41157], 00:27:38.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:38.591 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:27:38.591 | 99.99th=[44303] 00:27:38.591 bw ( KiB/s): min= 704, max= 768, per=65.98%, avg=755.20, stdev=26.27, samples=20 00:27:38.591 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:27:38.591 lat (usec) : 1000=49.47% 00:27:38.591 lat (msec) : 2=0.21%, 50=50.32% 00:27:38.591 cpu : usr=94.48%, sys=5.22%, ctx=14, majf=0, minf=185 00:27:38.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.591 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:38.591 00:27:38.591 Run status group 0 (all jobs): 00:27:38.591 READ: bw=1144KiB/s (1172kB/s), 389KiB/s-756KiB/s (398kB/s-774kB/s), io=11.2MiB (11.7MB), run=10005-10011msec 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
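The two fio jobs in this subtest read from two separate subsystems, each backed by its own null bdev carrying 16 bytes of metadata with DIF type 1, and both listening on the same 10.0.0.2:4420 portal behind the --dif-insert-or-strip transport. The per-subsystem setup traced at the start of the test amounts to the following sketch (again assuming rpc_cmd maps to scripts/rpc.py; names and arguments are those shown in the trace):

  # one null bdev + subsystem per fio filename, as done by create_subsystems 0 1
  for i in 0 1; do
    ./scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
  done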
00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 00:27:38.591 real 0m11.327s 00:27:38.591 user 0m20.122s 00:27:38.591 sys 0m1.393s 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.591 16:04:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 ************************************ 00:27:38.591 END TEST fio_dif_1_multi_subsystems 00:27:38.591 ************************************ 00:27:38.591 16:04:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:38.591 16:04:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:38.591 16:04:08 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:38.591 16:04:08 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.591 16:04:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 ************************************ 00:27:38.591 START TEST fio_dif_rand_params 00:27:38.591 ************************************ 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 bdev_null0 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.591 [2024-07-12 16:04:08.073530] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.591 { 00:27:38.591 "params": { 00:27:38.591 "name": "Nvme$subsystem", 00:27:38.591 "trtype": "$TEST_TRANSPORT", 00:27:38.591 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.591 "adrfam": "ipv4", 00:27:38.591 "trsvcid": "$NVMF_PORT", 00:27:38.591 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.591 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.591 "hdgst": ${hdgst:-false}, 00:27:38.591 "ddgst": ${ddgst:-false} 00:27:38.591 }, 00:27:38.591 "method": "bdev_nvme_attach_controller" 00:27:38.591 } 00:27:38.591 EOF 00:27:38.591 )") 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:38.591 16:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
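The heredoc fragment built above (and echoed in full just below) is not handed to fio on its own: gen_nvmf_target_json splices one such '{ "params": ..., "method": "bdev_nvme_attach_controller" }' object per controller into a bdev-subsystem JSON document, which the jq . step then writes onto /dev/fd/62. A minimal standalone sketch of that assembled input and of the fio invocation used here, with a temporary file in place of the test's process substitutions (the real wrapper emitted by nvmf/common.sh may carry additional entries, e.g. a bdev_wait_for_examine step, so treat this as an approximation):

cat > /tmp/spdk_fio_conf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# LD_PRELOAD of the spdk_bdev plugin is what lets fio resolve --ioengine=spdk_bdev
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/spdk_fio_conf.json job.fio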
00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:38.592 "params": { 00:27:38.592 "name": "Nvme0", 00:27:38.592 "trtype": "tcp", 00:27:38.592 "traddr": "10.0.0.2", 00:27:38.592 "adrfam": "ipv4", 00:27:38.592 "trsvcid": "4420", 00:27:38.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.592 "hdgst": false, 00:27:38.592 "ddgst": false 00:27:38.592 }, 00:27:38.592 "method": "bdev_nvme_attach_controller" 00:27:38.592 }' 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:38.592 16:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.849 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:38.849 ... 
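The job banner above comes from the fio job file that gen_fio_conf wrote to /dev/fd/61; the file itself is never echoed in this log. From the parameters chosen for this pass (bs=128k, numjobs=3, iodepth=3, runtime=5) it is roughly equivalent to the sketch below; the filename is an assumption (bdev_nvme_attach_controller with name Nvme0 normally exposes namespace 1 as bdev Nvme0n1), not the script's literal output:

[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

; filename is the SPDK bdev name exposed by the controller attached above (assumed)
[filename0]
filename=Nvme0n1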
00:27:38.849 fio-3.35 00:27:38.849 Starting 3 threads 00:27:38.849 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.398 00:27:45.398 filename0: (groupid=0, jobs=1): err= 0: pid=146935: Fri Jul 12 16:04:13 2024 00:27:45.398 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5019msec) 00:27:45.398 slat (nsec): min=4672, max=29022, avg=13678.74, stdev=2392.04 00:27:45.398 clat (usec): min=5116, max=89321, avg=13704.06, stdev=12100.54 00:27:45.398 lat (usec): min=5129, max=89349, avg=13717.74, stdev=12100.56 00:27:45.398 clat percentiles (usec): 00:27:45.398 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 8094], 00:27:45.398 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11076], 00:27:45.398 | 70.00th=[12387], 80.00th=[13566], 90.00th=[15926], 95.00th=[51119], 00:27:45.398 | 99.00th=[54264], 99.50th=[55837], 99.90th=[60556], 99.95th=[89654], 00:27:45.398 | 99.99th=[89654] 00:27:45.398 bw ( KiB/s): min=21504, max=37632, per=34.27%, avg=28010.80, stdev=5449.81, samples=10 00:27:45.398 iops : min= 168, max= 294, avg=218.80, stdev=42.62, samples=10 00:27:45.398 lat (msec) : 10=51.14%, 20=40.20%, 50=2.46%, 100=6.20% 00:27:45.398 cpu : usr=92.73%, sys=6.80%, ctx=11, majf=0, minf=80 00:27:45.398 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.398 issued rwts: total=1097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:45.398 filename0: (groupid=0, jobs=1): err= 0: pid=146936: Fri Jul 12 16:04:13 2024 00:27:45.398 read: IOPS=225, BW=28.2MiB/s (29.6MB/s)(141MiB/5004msec) 00:27:45.398 slat (nsec): min=4852, max=32303, avg=14604.91, stdev=2953.44 00:27:45.398 clat (usec): min=4243, max=58272, avg=13275.14, stdev=11499.08 00:27:45.398 lat (usec): min=4257, max=58283, avg=13289.74, stdev=11499.03 00:27:45.398 clat percentiles (usec): 00:27:45.398 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6456], 20.00th=[ 8029], 00:27:45.398 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[11076], 00:27:45.398 | 70.00th=[12256], 80.00th=[13304], 90.00th=[15139], 95.00th=[50070], 00:27:45.398 | 99.00th=[53740], 99.50th=[54789], 99.90th=[58459], 99.95th=[58459], 00:27:45.398 | 99.99th=[58459] 00:27:45.398 bw ( KiB/s): min=21504, max=39680, per=35.27%, avg=28825.60, stdev=5526.54, samples=10 00:27:45.398 iops : min= 168, max= 310, avg=225.20, stdev=43.18, samples=10 00:27:45.398 lat (msec) : 10=52.79%, 20=39.24%, 50=2.39%, 100=5.58% 00:27:45.398 cpu : usr=89.69%, sys=8.37%, ctx=528, majf=0, minf=117 00:27:45.398 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.398 issued rwts: total=1129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:45.398 filename0: (groupid=0, jobs=1): err= 0: pid=146937: Fri Jul 12 16:04:13 2024 00:27:45.398 read: IOPS=197, BW=24.7MiB/s (25.8MB/s)(124MiB/5045msec) 00:27:45.398 slat (nsec): min=4544, max=35322, avg=14888.02, stdev=3921.93 00:27:45.398 clat (usec): min=4510, max=91160, avg=15149.15, stdev=13964.53 00:27:45.398 lat (usec): min=4524, max=91174, avg=15164.04, stdev=13964.55 00:27:45.398 clat percentiles (usec): 
00:27:45.398 | 1.00th=[ 5538], 5.00th=[ 6128], 10.00th=[ 7439], 20.00th=[ 8356], 00:27:45.398 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[10552], 60.00th=[11338], 00:27:45.398 | 70.00th=[12256], 80.00th=[13042], 90.00th=[49021], 95.00th=[51119], 00:27:45.398 | 99.00th=[53740], 99.50th=[55313], 99.90th=[90702], 99.95th=[90702], 00:27:45.398 | 99.99th=[90702] 00:27:45.398 bw ( KiB/s): min=17664, max=34816, per=31.07%, avg=25395.20, stdev=5477.04, samples=10 00:27:45.398 iops : min= 138, max= 272, avg=198.40, stdev=42.79, samples=10 00:27:45.398 lat (msec) : 10=46.33%, 20=41.31%, 50=4.92%, 100=7.44% 00:27:45.398 cpu : usr=88.84%, sys=8.56%, ctx=387, majf=0, minf=78 00:27:45.398 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:45.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.398 issued rwts: total=995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:45.398 00:27:45.398 Run status group 0 (all jobs): 00:27:45.398 READ: bw=79.8MiB/s (83.7MB/s), 24.7MiB/s-28.2MiB/s (25.8MB/s-29.6MB/s), io=403MiB (422MB), run=5004-5045msec 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
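The create_subsystems 0 1 2 call traced below repeats one fixed RPC sequence per index: create a 64 MB, 512-byte-block null bdev with 16 bytes of metadata and DIF type 2, wrap it in an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. A rough standalone equivalent using scripts/rpc.py, assuming the target and its TCP transport are already running (the test drives the same RPCs through its rpc_cmd wrapper; the loop below is an illustration, not the script's literal code):

for i in 0 1 2; do
  scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
    --serial-number "53313233-$i" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
    -t tcp -a 10.0.0.2 -s 4420
done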
00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 bdev_null0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 [2024-07-12 16:04:14.132482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 bdev_null1 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 bdev_null2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.398 { 00:27:45.398 "params": { 00:27:45.398 "name": "Nvme$subsystem", 00:27:45.398 "trtype": "$TEST_TRANSPORT", 00:27:45.398 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.398 "adrfam": "ipv4", 00:27:45.398 "trsvcid": "$NVMF_PORT", 00:27:45.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.398 "hdgst": ${hdgst:-false}, 00:27:45.398 "ddgst": ${ddgst:-false} 00:27:45.398 }, 00:27:45.398 "method": "bdev_nvme_attach_controller" 00:27:45.398 } 00:27:45.398 EOF 00:27:45.398 )") 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.398 { 00:27:45.398 "params": { 00:27:45.398 "name": "Nvme$subsystem", 00:27:45.398 "trtype": "$TEST_TRANSPORT", 00:27:45.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.398 "adrfam": "ipv4", 00:27:45.398 "trsvcid": "$NVMF_PORT", 00:27:45.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.398 "hdgst": ${hdgst:-false}, 00:27:45.398 "ddgst": ${ddgst:-false} 00:27:45.398 }, 00:27:45.398 "method": "bdev_nvme_attach_controller" 00:27:45.398 } 00:27:45.398 EOF 00:27:45.398 )") 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.398 { 00:27:45.398 "params": { 00:27:45.398 "name": "Nvme$subsystem", 00:27:45.398 "trtype": "$TEST_TRANSPORT", 00:27:45.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.398 "adrfam": "ipv4", 00:27:45.398 "trsvcid": "$NVMF_PORT", 00:27:45.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.398 "hdgst": ${hdgst:-false}, 00:27:45.398 "ddgst": ${ddgst:-false} 00:27:45.398 }, 00:27:45.398 "method": "bdev_nvme_attach_controller" 00:27:45.398 } 00:27:45.398 EOF 00:27:45.398 )") 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:45.398 16:04:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:45.398 "params": { 00:27:45.398 "name": "Nvme0", 00:27:45.399 "trtype": "tcp", 00:27:45.399 "traddr": "10.0.0.2", 00:27:45.399 "adrfam": "ipv4", 00:27:45.399 "trsvcid": "4420", 00:27:45.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.399 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.399 "hdgst": false, 00:27:45.399 "ddgst": false 00:27:45.399 }, 00:27:45.399 "method": "bdev_nvme_attach_controller" 00:27:45.399 },{ 00:27:45.399 "params": { 00:27:45.399 "name": "Nvme1", 00:27:45.399 "trtype": "tcp", 00:27:45.399 "traddr": "10.0.0.2", 00:27:45.399 "adrfam": "ipv4", 00:27:45.399 "trsvcid": "4420", 00:27:45.399 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:45.399 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:45.399 "hdgst": false, 00:27:45.399 "ddgst": false 00:27:45.399 }, 00:27:45.399 "method": "bdev_nvme_attach_controller" 00:27:45.399 },{ 00:27:45.399 "params": { 00:27:45.399 "name": "Nvme2", 00:27:45.399 "trtype": "tcp", 00:27:45.399 "traddr": "10.0.0.2", 00:27:45.399 "adrfam": "ipv4", 00:27:45.399 "trsvcid": "4420", 00:27:45.399 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:45.399 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:45.399 "hdgst": false, 00:27:45.399 "ddgst": false 00:27:45.399 }, 00:27:45.399 "method": "bdev_nvme_attach_controller" 00:27:45.399 }' 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:45.399 16:04:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.399 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:45.399 ... 00:27:45.399 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:45.399 ... 00:27:45.399 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:45.399 ... 00:27:45.399 fio-3.35 00:27:45.399 Starting 24 threads 00:27:45.399 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.590 00:27:57.590 filename0: (groupid=0, jobs=1): err= 0: pid=147799: Fri Jul 12 16:04:25 2024 00:27:57.590 read: IOPS=71, BW=286KiB/s (293kB/s)(2896KiB/10136msec) 00:27:57.590 slat (nsec): min=6669, max=93747, avg=19439.88, stdev=10925.32 00:27:57.590 clat (msec): min=19, max=428, avg=223.59, stdev=48.10 00:27:57.590 lat (msec): min=19, max=428, avg=223.61, stdev=48.10 00:27:57.590 clat percentiles (msec): 00:27:57.590 | 1.00th=[ 20], 5.00th=[ 132], 10.00th=[ 192], 20.00th=[ 203], 00:27:57.590 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 232], 00:27:57.590 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 279], 95.00th=[ 300], 00:27:57.590 | 99.00th=[ 317], 99.50th=[ 326], 99.90th=[ 430], 99.95th=[ 430], 00:27:57.590 | 99.99th=[ 430] 00:27:57.590 bw ( KiB/s): min= 240, max= 496, per=4.90%, avg=283.20, stdev=62.75, samples=20 00:27:57.590 iops : min= 60, max= 124, avg=70.80, stdev=15.69, samples=20 00:27:57.590 lat (msec) : 20=2.21%, 250=83.43%, 500=14.36% 00:27:57.590 cpu : usr=98.29%, sys=1.25%, ctx=11, majf=0, minf=9 00:27:57.590 IO depths : 1=3.3%, 2=8.6%, 4=22.0%, 8=57.0%, 16=9.1%, 32=0.0%, >=64=0.0% 00:27:57.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 issued rwts: total=724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.590 filename0: (groupid=0, jobs=1): err= 0: pid=147800: Fri Jul 12 16:04:25 2024 00:27:57.590 read: IOPS=64, BW=259KiB/s (265kB/s)(2624KiB/10136msec) 00:27:57.590 slat (usec): min=4, max=130, avg=19.65, stdev=13.54 00:27:57.590 clat (msec): min=143, max=448, avg=246.87, stdev=53.25 00:27:57.590 lat (msec): min=143, max=448, avg=246.89, stdev=53.25 00:27:57.590 clat percentiles (msec): 00:27:57.590 | 1.00th=[ 144], 5.00th=[ 184], 10.00th=[ 203], 20.00th=[ 211], 00:27:57.590 | 30.00th=[ 220], 40.00th=[ 224], 50.00th=[ 232], 60.00th=[ 239], 00:27:57.590 | 70.00th=[ 251], 80.00th=[ 288], 90.00th=[ 338], 95.00th=[ 359], 00:27:57.590 | 99.00th=[ 372], 99.50th=[ 414], 99.90th=[ 451], 99.95th=[ 451], 00:27:57.590 | 99.99th=[ 451] 00:27:57.590 bw ( KiB/s): min= 128, max= 336, per=4.43%, avg=256.00, stdev=42.17, samples=20 00:27:57.590 iops : min= 32, max= 84, avg=64.00, stdev=10.54, samples=20 00:27:57.590 lat (msec) : 250=69.66%, 500=30.34% 00:27:57.590 cpu : usr=96.45%, sys=2.31%, ctx=228, majf=0, minf=9 00:27:57.590 IO depths : 1=1.4%, 
2=4.9%, 4=16.6%, 8=66.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:27:57.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.590 filename0: (groupid=0, jobs=1): err= 0: pid=147801: Fri Jul 12 16:04:25 2024 00:27:57.590 read: IOPS=70, BW=280KiB/s (287kB/s)(2840KiB/10135msec) 00:27:57.590 slat (nsec): min=4130, max=82280, avg=16822.89, stdev=10551.29 00:27:57.590 clat (msec): min=18, max=392, avg=227.36, stdev=51.54 00:27:57.590 lat (msec): min=18, max=392, avg=227.37, stdev=51.54 00:27:57.590 clat percentiles (msec): 00:27:57.590 | 1.00th=[ 19], 5.00th=[ 171], 10.00th=[ 192], 20.00th=[ 207], 00:27:57.590 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 232], 00:27:57.590 | 70.00th=[ 236], 80.00th=[ 243], 90.00th=[ 317], 95.00th=[ 326], 00:27:57.590 | 99.00th=[ 330], 99.50th=[ 372], 99.90th=[ 393], 99.95th=[ 393], 00:27:57.590 | 99.99th=[ 393] 00:27:57.590 bw ( KiB/s): min= 256, max= 384, per=4.80%, avg=277.60, stdev=37.53, samples=20 00:27:57.590 iops : min= 64, max= 96, avg=69.40, stdev= 9.38, samples=20 00:27:57.590 lat (msec) : 20=2.25%, 250=81.97%, 500=15.77% 00:27:57.590 cpu : usr=96.99%, sys=2.02%, ctx=63, majf=0, minf=9 00:27:57.590 IO depths : 1=1.5%, 2=4.4%, 4=14.6%, 8=68.5%, 16=11.0%, 32=0.0%, >=64=0.0% 00:27:57.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 complete : 0=0.0%, 4=91.1%, 8=3.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.590 filename0: (groupid=0, jobs=1): err= 0: pid=147802: Fri Jul 12 16:04:25 2024 00:27:57.590 read: IOPS=70, BW=280KiB/s (287kB/s)(2840KiB/10136msec) 00:27:57.590 slat (usec): min=5, max=156, avg=21.02, stdev=15.05 00:27:57.590 clat (msec): min=18, max=432, avg=226.96, stdev=51.23 00:27:57.590 lat (msec): min=18, max=432, avg=226.98, stdev=51.23 00:27:57.590 clat percentiles (msec): 00:27:57.590 | 1.00th=[ 20], 5.00th=[ 133], 10.00th=[ 197], 20.00th=[ 209], 00:27:57.590 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 228], 60.00th=[ 232], 00:27:57.590 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 296], 95.00th=[ 313], 00:27:57.590 | 99.00th=[ 334], 99.50th=[ 388], 99.90th=[ 435], 99.95th=[ 435], 00:27:57.590 | 99.99th=[ 435] 00:27:57.590 bw ( KiB/s): min= 240, max= 384, per=4.80%, avg=277.60, stdev=47.37, samples=20 00:27:57.590 iops : min= 60, max= 96, avg=69.40, stdev=11.84, samples=20 00:27:57.590 lat (msec) : 20=2.25%, 250=81.41%, 500=16.34% 00:27:57.590 cpu : usr=97.16%, sys=1.82%, ctx=117, majf=0, minf=9 00:27:57.590 IO depths : 1=3.0%, 2=8.6%, 4=23.1%, 8=55.8%, 16=9.6%, 32=0.0%, >=64=0.0% 00:27:57.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.590 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.590 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.590 filename0: (groupid=0, jobs=1): err= 0: pid=147803: Fri Jul 12 16:04:25 2024 00:27:57.590 read: IOPS=50, BW=203KiB/s (207kB/s)(2048KiB/10113msec) 00:27:57.590 slat (nsec): min=4147, max=49094, avg=20065.50, stdev=8647.64 00:27:57.590 clat (msec): min=196, max=495, avg=315.83, stdev=66.00 
00:27:57.590 lat (msec): min=196, max=495, avg=315.85, stdev=66.00 00:27:57.590 clat percentiles (msec): 00:27:57.590 | 1.00th=[ 197], 5.00th=[ 203], 10.00th=[ 211], 20.00th=[ 230], 00:27:57.590 | 30.00th=[ 309], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 330], 00:27:57.590 | 70.00th=[ 347], 80.00th=[ 368], 90.00th=[ 376], 95.00th=[ 405], 00:27:57.590 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 498], 00:27:57.590 | 99.99th=[ 498] 00:27:57.590 bw ( KiB/s): min= 128, max= 272, per=3.43%, avg=198.40, stdev=64.08, samples=20 00:27:57.590 iops : min= 32, max= 68, avg=49.60, stdev=16.02, samples=20 00:27:57.590 lat (msec) : 250=20.70%, 500=79.30% 00:27:57.590 cpu : usr=98.11%, sys=1.46%, ctx=15, majf=0, minf=9 00:27:57.590 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:27:57.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename0: (groupid=0, jobs=1): err= 0: pid=147804: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=50, BW=203KiB/s (207kB/s)(2048KiB/10113msec) 00:27:57.591 slat (usec): min=4, max=141, avg=25.20, stdev=13.37 00:27:57.591 clat (msec): min=135, max=498, avg=315.79, stdev=66.69 00:27:57.591 lat (msec): min=135, max=498, avg=315.81, stdev=66.69 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 188], 5.00th=[ 203], 10.00th=[ 209], 20.00th=[ 279], 00:27:57.591 | 30.00th=[ 296], 40.00th=[ 305], 50.00th=[ 317], 60.00th=[ 334], 00:27:57.591 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 393], 95.00th=[ 397], 00:27:57.591 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 498], 00:27:57.591 | 99.99th=[ 498] 00:27:57.591 bw ( KiB/s): min= 128, max= 272, per=3.43%, avg=198.40, stdev=65.54, samples=20 00:27:57.591 iops : min= 32, max= 68, avg=49.60, stdev=16.38, samples=20 00:27:57.591 lat (msec) : 250=19.92%, 500=80.08% 00:27:57.591 cpu : usr=97.51%, sys=1.48%, ctx=50, majf=0, minf=9 00:27:57.591 IO depths : 1=4.9%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:27:57.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename0: (groupid=0, jobs=1): err= 0: pid=147805: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=75, BW=303KiB/s (310kB/s)(3072KiB/10137msec) 00:27:57.591 slat (usec): min=4, max=101, avg=22.18, stdev=11.92 00:27:57.591 clat (msec): min=4, max=345, avg=210.98, stdev=51.03 00:27:57.591 lat (msec): min=4, max=345, avg=211.00, stdev=51.03 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 5], 5.00th=[ 128], 10.00th=[ 174], 20.00th=[ 201], 00:27:57.591 | 30.00th=[ 205], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 224], 00:27:57.591 | 70.00th=[ 232], 80.00th=[ 245], 90.00th=[ 249], 95.00th=[ 253], 00:27:57.591 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 347], 99.95th=[ 347], 00:27:57.591 | 99.99th=[ 347] 00:27:57.591 bw ( KiB/s): min= 240, max= 640, per=5.20%, avg=300.80, stdev=95.52, samples=20 00:27:57.591 iops : min= 60, max= 160, avg=75.20, stdev=23.88, samples=20 00:27:57.591 lat (msec) : 10=2.08%, 20=2.08%, 250=89.58%, 500=6.25% 00:27:57.591 cpu : 
usr=98.22%, sys=1.35%, ctx=22, majf=0, minf=9 00:27:57.591 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:57.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename0: (groupid=0, jobs=1): err= 0: pid=147806: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=69, BW=277KiB/s (284kB/s)(2808KiB/10135msec) 00:27:57.591 slat (usec): min=4, max=735, avg=20.09, stdev=30.26 00:27:57.591 clat (msec): min=17, max=332, avg=230.48, stdev=45.64 00:27:57.591 lat (msec): min=17, max=332, avg=230.50, stdev=45.64 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 18], 5.00th=[ 194], 10.00th=[ 199], 20.00th=[ 207], 00:27:57.591 | 30.00th=[ 218], 40.00th=[ 220], 50.00th=[ 228], 60.00th=[ 234], 00:27:57.591 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 288], 95.00th=[ 309], 00:27:57.591 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:27:57.591 | 99.99th=[ 334] 00:27:57.591 bw ( KiB/s): min= 256, max= 368, per=4.75%, avg=274.40, stdev=40.63, samples=20 00:27:57.591 iops : min= 64, max= 92, avg=68.60, stdev=10.16, samples=20 00:27:57.591 lat (msec) : 20=2.28%, 250=77.64%, 500=20.09% 00:27:57.591 cpu : usr=97.59%, sys=1.60%, ctx=41, majf=0, minf=9 00:27:57.591 IO depths : 1=1.3%, 2=7.5%, 4=25.1%, 8=55.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:27:57.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename1: (groupid=0, jobs=1): err= 0: pid=147807: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=71, BW=286KiB/s (293kB/s)(2904KiB/10137msec) 00:27:57.591 slat (nsec): min=8109, max=84955, avg=20072.46, stdev=10073.11 00:27:57.591 clat (msec): min=18, max=357, avg=221.83, stdev=45.82 00:27:57.591 lat (msec): min=18, max=357, avg=221.85, stdev=45.82 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 20], 5.00th=[ 133], 10.00th=[ 197], 20.00th=[ 207], 00:27:57.591 | 30.00th=[ 213], 40.00th=[ 218], 50.00th=[ 224], 60.00th=[ 230], 00:27:57.591 | 70.00th=[ 236], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 309], 00:27:57.591 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 359], 99.95th=[ 359], 00:27:57.591 | 99.99th=[ 359] 00:27:57.591 bw ( KiB/s): min= 240, max= 384, per=4.90%, avg=283.95, stdev=46.57, samples=20 00:27:57.591 iops : min= 60, max= 96, avg=70.95, stdev=11.56, samples=20 00:27:57.591 lat (msec) : 20=2.20%, 250=85.67%, 500=12.12% 00:27:57.591 cpu : usr=97.78%, sys=1.54%, ctx=22, majf=0, minf=9 00:27:57.591 IO depths : 1=2.1%, 2=6.2%, 4=18.9%, 8=62.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:57.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename1: (groupid=0, jobs=1): err= 0: pid=147808: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=71, BW=285KiB/s (292kB/s)(2888KiB/10135msec) 00:27:57.591 slat (usec): min=4, max=152, avg=17.65, stdev=11.83 
00:27:57.591 clat (msec): min=18, max=366, avg=223.19, stdev=47.92 00:27:57.591 lat (msec): min=18, max=366, avg=223.21, stdev=47.92 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 19], 5.00th=[ 161], 10.00th=[ 186], 20.00th=[ 209], 00:27:57.591 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 224], 60.00th=[ 228], 00:27:57.591 | 70.00th=[ 234], 80.00th=[ 243], 90.00th=[ 279], 95.00th=[ 313], 00:27:57.591 | 99.00th=[ 355], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:27:57.591 | 99.99th=[ 368] 00:27:57.591 bw ( KiB/s): min= 192, max= 384, per=4.89%, avg=282.40, stdev=54.51, samples=20 00:27:57.591 iops : min= 48, max= 96, avg=70.60, stdev=13.63, samples=20 00:27:57.591 lat (msec) : 20=2.22%, 250=84.49%, 500=13.30% 00:27:57.591 cpu : usr=97.63%, sys=1.57%, ctx=60, majf=0, minf=9 00:27:57.591 IO depths : 1=1.0%, 2=3.6%, 4=14.0%, 8=69.8%, 16=11.6%, 32=0.0%, >=64=0.0% 00:27:57.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=91.0%, 8=3.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename1: (groupid=0, jobs=1): err= 0: pid=147809: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=75, BW=301KiB/s (309kB/s)(3056KiB/10142msec) 00:27:57.591 slat (nsec): min=5923, max=82285, avg=13233.33, stdev=9075.14 00:27:57.591 clat (msec): min=9, max=387, avg=211.97, stdev=57.56 00:27:57.591 lat (msec): min=9, max=387, avg=211.98, stdev=57.55 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 11], 5.00th=[ 127], 10.00th=[ 136], 20.00th=[ 199], 00:27:57.591 | 30.00th=[ 211], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 226], 00:27:57.591 | 70.00th=[ 230], 80.00th=[ 236], 90.00th=[ 253], 95.00th=[ 275], 00:27:57.591 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:27:57.591 | 99.99th=[ 388] 00:27:57.591 bw ( KiB/s): min= 208, max= 640, per=5.18%, avg=299.20, stdev=88.11, samples=20 00:27:57.591 iops : min= 52, max= 160, avg=74.80, stdev=22.03, samples=20 00:27:57.591 lat (msec) : 10=0.52%, 20=3.66%, 100=0.79%, 250=84.03%, 500=10.99% 00:27:57.591 cpu : usr=97.71%, sys=1.59%, ctx=65, majf=0, minf=9 00:27:57.591 IO depths : 1=1.2%, 2=3.5%, 4=13.1%, 8=70.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:27:57.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=90.7%, 8=3.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename1: (groupid=0, jobs=1): err= 0: pid=147810: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=47, BW=190KiB/s (195kB/s)(1920KiB/10108msec) 00:27:57.591 slat (usec): min=9, max=114, avg=26.20, stdev=10.92 00:27:57.591 clat (msec): min=201, max=616, avg=336.58, stdev=48.43 00:27:57.591 lat (msec): min=201, max=616, avg=336.60, stdev=48.43 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 245], 5.00th=[ 288], 10.00th=[ 288], 20.00th=[ 305], 00:27:57.591 | 30.00th=[ 309], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 338], 00:27:57.591 | 70.00th=[ 355], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 397], 00:27:57.591 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 617], 99.95th=[ 617], 00:27:57.591 | 99.99th=[ 617] 00:27:57.591 bw ( KiB/s): min= 112, max= 256, per=3.20%, avg=185.60, stdev=65.54, samples=20 00:27:57.591 iops : min= 28, max= 64, avg=46.40, 
stdev=16.38, samples=20 00:27:57.591 lat (msec) : 250=1.25%, 500=98.33%, 750=0.42% 00:27:57.591 cpu : usr=96.29%, sys=2.19%, ctx=115, majf=0, minf=9 00:27:57.591 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:27:57.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.591 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.591 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.591 filename1: (groupid=0, jobs=1): err= 0: pid=147811: Fri Jul 12 16:04:25 2024 00:27:57.591 read: IOPS=64, BW=258KiB/s (265kB/s)(2616KiB/10127msec) 00:27:57.591 slat (usec): min=8, max=103, avg=27.28, stdev=23.42 00:27:57.591 clat (msec): min=137, max=433, avg=247.18, stdev=48.91 00:27:57.591 lat (msec): min=137, max=433, avg=247.21, stdev=48.92 00:27:57.591 clat percentiles (msec): 00:27:57.591 | 1.00th=[ 138], 5.00th=[ 197], 10.00th=[ 205], 20.00th=[ 218], 00:27:57.591 | 30.00th=[ 220], 40.00th=[ 226], 50.00th=[ 232], 60.00th=[ 245], 00:27:57.591 | 70.00th=[ 253], 80.00th=[ 288], 90.00th=[ 317], 95.00th=[ 338], 00:27:57.591 | 99.00th=[ 368], 99.50th=[ 414], 99.90th=[ 435], 99.95th=[ 435], 00:27:57.591 | 99.99th=[ 435] 00:27:57.591 bw ( KiB/s): min= 128, max= 384, per=4.42%, avg=255.20, stdev=66.37, samples=20 00:27:57.592 iops : min= 32, max= 96, avg=63.80, stdev=16.59, samples=20 00:27:57.592 lat (msec) : 250=66.97%, 500=33.03% 00:27:57.592 cpu : usr=98.55%, sys=1.01%, ctx=14, majf=0, minf=9 00:27:57.592 IO depths : 1=1.8%, 2=8.1%, 4=25.1%, 8=54.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=654,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 filename1: (groupid=0, jobs=1): err= 0: pid=147812: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=56, BW=228KiB/s (233kB/s)(2304KiB/10111msec) 00:27:57.592 slat (nsec): min=6917, max=52086, avg=22823.38, stdev=9212.89 00:27:57.592 clat (msec): min=201, max=493, avg=280.65, stdev=66.16 00:27:57.592 lat (msec): min=201, max=493, avg=280.68, stdev=66.16 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 203], 5.00th=[ 203], 10.00th=[ 209], 20.00th=[ 222], 00:27:57.592 | 30.00th=[ 232], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 305], 00:27:57.592 | 70.00th=[ 313], 80.00th=[ 334], 90.00th=[ 363], 95.00th=[ 380], 00:27:57.592 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:27:57.592 | 99.99th=[ 493] 00:27:57.592 bw ( KiB/s): min= 128, max= 272, per=3.86%, avg=224.00, stdev=53.70, samples=20 00:27:57.592 iops : min= 32, max= 68, avg=56.00, stdev=13.42, samples=20 00:27:57.592 lat (msec) : 250=46.88%, 500=53.12% 00:27:57.592 cpu : usr=98.22%, sys=1.32%, ctx=12, majf=0, minf=9 00:27:57.592 IO depths : 1=2.6%, 2=8.9%, 4=25.0%, 8=53.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 filename1: (groupid=0, jobs=1): err= 0: pid=147813: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=47, BW=190KiB/s 
(195kB/s)(1920KiB/10102msec) 00:27:57.592 slat (nsec): min=7089, max=53408, avg=20245.14, stdev=9566.78 00:27:57.592 clat (msec): min=217, max=488, avg=336.54, stdev=44.61 00:27:57.592 lat (msec): min=217, max=488, avg=336.56, stdev=44.61 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 279], 5.00th=[ 288], 10.00th=[ 288], 20.00th=[ 305], 00:27:57.592 | 30.00th=[ 309], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 338], 00:27:57.592 | 70.00th=[ 355], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 397], 00:27:57.592 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 489], 99.95th=[ 489], 00:27:57.592 | 99.99th=[ 489] 00:27:57.592 bw ( KiB/s): min= 128, max= 256, per=3.20%, avg=185.60, stdev=65.33, samples=20 00:27:57.592 iops : min= 32, max= 64, avg=46.40, stdev=16.33, samples=20 00:27:57.592 lat (msec) : 250=0.83%, 500=99.17% 00:27:57.592 cpu : usr=98.51%, sys=1.07%, ctx=13, majf=0, minf=9 00:27:57.592 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 filename1: (groupid=0, jobs=1): err= 0: pid=147814: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=47, BW=190KiB/s (195kB/s)(1920KiB/10102msec) 00:27:57.592 slat (nsec): min=7318, max=41745, avg=17014.20, stdev=7953.58 00:27:57.592 clat (msec): min=135, max=836, avg=336.55, stdev=86.31 00:27:57.592 lat (msec): min=135, max=836, avg=336.57, stdev=86.31 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 203], 5.00th=[ 211], 10.00th=[ 288], 20.00th=[ 300], 00:27:57.592 | 30.00th=[ 313], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 338], 00:27:57.592 | 70.00th=[ 359], 80.00th=[ 372], 90.00th=[ 376], 95.00th=[ 451], 00:27:57.592 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 835], 99.95th=[ 835], 00:27:57.592 | 99.99th=[ 835] 00:27:57.592 bw ( KiB/s): min= 128, max= 256, per=3.38%, avg=195.37, stdev=62.56, samples=19 00:27:57.592 iops : min= 32, max= 64, avg=48.84, stdev=15.64, samples=19 00:27:57.592 lat (msec) : 250=9.17%, 500=87.50%, 750=2.92%, 1000=0.42% 00:27:57.592 cpu : usr=98.42%, sys=1.16%, ctx=12, majf=0, minf=9 00:27:57.592 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 filename2: (groupid=0, jobs=1): err= 0: pid=147815: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=45, BW=184KiB/s (188kB/s)(1856KiB/10098msec) 00:27:57.592 slat (nsec): min=6738, max=48689, avg=18892.50, stdev=8293.82 00:27:57.592 clat (msec): min=245, max=694, avg=346.21, stdev=72.34 00:27:57.592 lat (msec): min=245, max=694, avg=346.22, stdev=72.34 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 288], 5.00th=[ 292], 10.00th=[ 296], 20.00th=[ 309], 00:27:57.592 | 30.00th=[ 317], 40.00th=[ 321], 50.00th=[ 330], 60.00th=[ 342], 00:27:57.592 | 70.00th=[ 355], 80.00th=[ 368], 90.00th=[ 380], 95.00th=[ 393], 00:27:57.592 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 693], 99.95th=[ 693], 00:27:57.592 | 99.99th=[ 693] 00:27:57.592 bw ( KiB/s): min= 128, max= 
256, per=3.26%, avg=188.63, stdev=65.66, samples=19 00:27:57.592 iops : min= 32, max= 64, avg=47.16, stdev=16.42, samples=19 00:27:57.592 lat (msec) : 250=0.43%, 500=96.12%, 750=3.45% 00:27:57.592 cpu : usr=98.46%, sys=1.10%, ctx=20, majf=0, minf=11 00:27:57.592 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 filename2: (groupid=0, jobs=1): err= 0: pid=147816: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=47, BW=190KiB/s (195kB/s)(1920KiB/10098msec) 00:27:57.592 slat (nsec): min=8821, max=48483, avg=18751.81, stdev=8574.25 00:27:57.592 clat (msec): min=187, max=485, avg=336.43, stdev=51.69 00:27:57.592 lat (msec): min=187, max=485, avg=336.45, stdev=51.69 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 213], 5.00th=[ 279], 10.00th=[ 288], 20.00th=[ 305], 00:27:57.592 | 30.00th=[ 309], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 347], 00:27:57.592 | 70.00th=[ 355], 80.00th=[ 368], 90.00th=[ 384], 95.00th=[ 460], 00:27:57.592 | 99.00th=[ 485], 99.50th=[ 485], 99.90th=[ 485], 99.95th=[ 485], 00:27:57.592 | 99.99th=[ 485] 00:27:57.592 bw ( KiB/s): min= 128, max= 256, per=3.20%, avg=185.60, stdev=62.38, samples=20 00:27:57.592 iops : min= 32, max= 64, avg=46.40, stdev=15.59, samples=20 00:27:57.592 lat (msec) : 250=3.33%, 500=96.67% 00:27:57.592 cpu : usr=98.37%, sys=1.21%, ctx=16, majf=0, minf=9 00:27:57.592 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 filename2: (groupid=0, jobs=1): err= 0: pid=147817: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=45, BW=184KiB/s (188kB/s)(1856KiB/10101msec) 00:27:57.592 slat (nsec): min=6622, max=50849, avg=18188.51, stdev=8460.32 00:27:57.592 clat (msec): min=210, max=854, avg=346.27, stdev=82.40 00:27:57.592 lat (msec): min=210, max=854, avg=346.29, stdev=82.40 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 224], 5.00th=[ 247], 10.00th=[ 292], 20.00th=[ 309], 00:27:57.592 | 30.00th=[ 317], 40.00th=[ 317], 50.00th=[ 330], 60.00th=[ 342], 00:27:57.592 | 70.00th=[ 359], 80.00th=[ 368], 90.00th=[ 393], 95.00th=[ 447], 00:27:57.592 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 852], 99.95th=[ 852], 00:27:57.592 | 99.99th=[ 852] 00:27:57.592 bw ( KiB/s): min= 128, max= 256, per=3.26%, avg=188.63, stdev=62.56, samples=19 00:27:57.592 iops : min= 32, max= 64, avg=47.16, stdev=15.64, samples=19 00:27:57.592 lat (msec) : 250=5.17%, 500=91.38%, 750=3.02%, 1000=0.43% 00:27:57.592 cpu : usr=98.35%, sys=1.23%, ctx=14, majf=0, minf=9 00:27:57.592 IO depths : 1=3.2%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 
filename2: (groupid=0, jobs=1): err= 0: pid=147818: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=58, BW=234KiB/s (239kB/s)(2368KiB/10136msec) 00:27:57.592 slat (usec): min=8, max=153, avg=25.10, stdev=15.55 00:27:57.592 clat (msec): min=119, max=430, avg=273.72, stdev=57.87 00:27:57.592 lat (msec): min=119, max=430, avg=273.75, stdev=57.88 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 144], 5.00th=[ 186], 10.00th=[ 203], 20.00th=[ 220], 00:27:57.592 | 30.00th=[ 236], 40.00th=[ 247], 50.00th=[ 288], 60.00th=[ 305], 00:27:57.592 | 70.00th=[ 313], 80.00th=[ 317], 90.00th=[ 355], 95.00th=[ 368], 00:27:57.592 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 430], 99.95th=[ 430], 00:27:57.592 | 99.99th=[ 430] 00:27:57.592 bw ( KiB/s): min= 128, max= 256, per=3.98%, avg=230.40, stdev=52.53, samples=20 00:27:57.592 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:27:57.592 lat (msec) : 250=47.97%, 500=52.03% 00:27:57.592 cpu : usr=97.30%, sys=1.71%, ctx=156, majf=0, minf=9 00:27:57.592 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:27:57.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.592 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.592 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.592 filename2: (groupid=0, jobs=1): err= 0: pid=147819: Fri Jul 12 16:04:25 2024 00:27:57.592 read: IOPS=52, BW=211KiB/s (217kB/s)(2136KiB/10102msec) 00:27:57.592 slat (nsec): min=6521, max=48444, avg=19084.51, stdev=9140.74 00:27:57.592 clat (msec): min=145, max=487, avg=302.50, stdev=71.41 00:27:57.592 lat (msec): min=145, max=487, avg=302.52, stdev=71.41 00:27:57.592 clat percentiles (msec): 00:27:57.592 | 1.00th=[ 146], 5.00th=[ 201], 10.00th=[ 213], 20.00th=[ 226], 00:27:57.592 | 30.00th=[ 249], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 317], 00:27:57.592 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 384], 95.00th=[ 422], 00:27:57.592 | 99.00th=[ 489], 99.50th=[ 489], 99.90th=[ 489], 99.95th=[ 489], 00:27:57.592 | 99.99th=[ 489] 00:27:57.593 bw ( KiB/s): min= 128, max= 304, per=3.59%, avg=207.20, stdev=64.31, samples=20 00:27:57.593 iops : min= 32, max= 76, avg=51.80, stdev=16.08, samples=20 00:27:57.593 lat (msec) : 250=32.58%, 500=67.42% 00:27:57.593 cpu : usr=98.42%, sys=1.17%, ctx=15, majf=0, minf=9 00:27:57.593 IO depths : 1=2.4%, 2=7.7%, 4=21.9%, 8=57.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:27:57.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 issued rwts: total=534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.593 filename2: (groupid=0, jobs=1): err= 0: pid=147820: Fri Jul 12 16:04:25 2024 00:27:57.593 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10047msec) 00:27:57.593 slat (nsec): min=7436, max=58247, avg=23147.76, stdev=9009.32 00:27:57.593 clat (msec): min=186, max=593, avg=304.24, stdev=66.87 00:27:57.593 lat (msec): min=186, max=593, avg=304.27, stdev=66.87 00:27:57.593 clat percentiles (msec): 00:27:57.593 | 1.00th=[ 188], 5.00th=[ 203], 10.00th=[ 224], 20.00th=[ 232], 00:27:57.593 | 30.00th=[ 251], 40.00th=[ 300], 50.00th=[ 309], 60.00th=[ 317], 00:27:57.593 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 372], 95.00th=[ 393], 00:27:57.593 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 592], 
99.95th=[ 592], 00:27:57.593 | 99.99th=[ 592] 00:27:57.593 bw ( KiB/s): min= 112, max= 256, per=3.53%, avg=204.80, stdev=64.55, samples=20 00:27:57.593 iops : min= 28, max= 64, avg=51.20, stdev=16.14, samples=20 00:27:57.593 lat (msec) : 250=29.92%, 500=69.70%, 750=0.38% 00:27:57.593 cpu : usr=98.34%, sys=1.25%, ctx=14, majf=0, minf=9 00:27:57.593 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:27:57.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.593 filename2: (groupid=0, jobs=1): err= 0: pid=147821: Fri Jul 12 16:04:25 2024 00:27:57.593 read: IOPS=74, BW=297KiB/s (304kB/s)(3008KiB/10135msec) 00:27:57.593 slat (nsec): min=8839, max=90201, avg=21926.37, stdev=10210.66 00:27:57.593 clat (msec): min=18, max=315, avg=215.45, stdev=42.90 00:27:57.593 lat (msec): min=18, max=315, avg=215.47, stdev=42.90 00:27:57.593 clat percentiles (msec): 00:27:57.593 | 1.00th=[ 20], 5.00th=[ 133], 10.00th=[ 176], 20.00th=[ 197], 00:27:57.593 | 30.00th=[ 209], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 228], 00:27:57.593 | 70.00th=[ 234], 80.00th=[ 247], 90.00th=[ 249], 95.00th=[ 255], 00:27:57.593 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 317], 99.95th=[ 317], 00:27:57.593 | 99.99th=[ 317] 00:27:57.593 bw ( KiB/s): min= 256, max= 496, per=5.09%, avg=294.40, stdev=70.30, samples=20 00:27:57.593 iops : min= 64, max= 124, avg=73.60, stdev=17.58, samples=20 00:27:57.593 lat (msec) : 20=2.13%, 250=89.63%, 500=8.24% 00:27:57.593 cpu : usr=98.25%, sys=1.31%, ctx=44, majf=0, minf=9 00:27:57.593 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:57.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.593 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:57.593 filename2: (groupid=0, jobs=1): err= 0: pid=147822: Fri Jul 12 16:04:25 2024 00:27:57.593 read: IOPS=63, BW=255KiB/s (261kB/s)(2584KiB/10132msec) 00:27:57.593 slat (usec): min=8, max=103, avg=28.29, stdev=21.92 00:27:57.593 clat (msec): min=143, max=398, avg=249.12, stdev=48.54 00:27:57.593 lat (msec): min=143, max=398, avg=249.15, stdev=48.55 00:27:57.593 clat percentiles (msec): 00:27:57.593 | 1.00th=[ 144], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 215], 00:27:57.593 | 30.00th=[ 220], 40.00th=[ 230], 50.00th=[ 243], 60.00th=[ 247], 00:27:57.593 | 70.00th=[ 253], 80.00th=[ 313], 90.00th=[ 317], 95.00th=[ 330], 00:27:57.593 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 401], 99.95th=[ 401], 00:27:57.593 | 99.99th=[ 401] 00:27:57.593 bw ( KiB/s): min= 144, max= 304, per=4.37%, avg=252.00, stdev=28.84, samples=20 00:27:57.593 iops : min= 36, max= 76, avg=63.00, stdev= 7.21, samples=20 00:27:57.593 lat (msec) : 250=69.66%, 500=30.34% 00:27:57.593 cpu : usr=98.28%, sys=1.29%, ctx=13, majf=0, minf=9 00:27:57.593 IO depths : 1=2.3%, 2=7.6%, 4=22.0%, 8=57.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:27:57.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 complete : 0=0.0%, 4=93.3%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.593 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.593 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:27:57.593 00:27:57.593 Run status group 0 (all jobs): 00:27:57.593 READ: bw=5772KiB/s (5911kB/s), 184KiB/s-303KiB/s (188kB/s-310kB/s), io=57.2MiB (59.9MB), run=10047-10142msec 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 bdev_null0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 [2024-07-12 16:04:25.947811] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:57.593 16:04:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.593 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.593 bdev_null1 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.594 { 00:27:57.594 "params": { 00:27:57.594 "name": "Nvme$subsystem", 00:27:57.594 "trtype": "$TEST_TRANSPORT", 00:27:57.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.594 "adrfam": "ipv4", 00:27:57.594 "trsvcid": "$NVMF_PORT", 00:27:57.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.594 "hdgst": ${hdgst:-false}, 00:27:57.594 "ddgst": ${ddgst:-false} 00:27:57.594 }, 00:27:57.594 "method": "bdev_nvme_attach_controller" 00:27:57.594 } 00:27:57.594 EOF 00:27:57.594 )") 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:57.594 { 00:27:57.594 "params": { 00:27:57.594 "name": "Nvme$subsystem", 00:27:57.594 "trtype": "$TEST_TRANSPORT", 00:27:57.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.594 "adrfam": "ipv4", 00:27:57.594 "trsvcid": "$NVMF_PORT", 00:27:57.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.594 "hdgst": ${hdgst:-false}, 00:27:57.594 "ddgst": ${ddgst:-false} 00:27:57.594 }, 00:27:57.594 "method": "bdev_nvme_attach_controller" 00:27:57.594 } 00:27:57.594 EOF 00:27:57.594 )") 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
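The xtrace above shows target/dif.sh building two DIF-capable null bdevs and exporting each through its own NVMe/TCP subsystem before wiring up the fio plugin. A minimal sketch of that same RPC sequence, assuming SPDK's scripts/rpc.py is invoked directly (the test reaches the same methods through its rpc_cmd wrapper), with sizes and addresses taken from the trace:

    # Per-subsystem setup mirrored from the trace: 64 MiB null bdev, 512-byte blocks,
    # 16-byte metadata, DIF type 1, exposed on 10.0.0.2:4420 over TCP.
    for i in 0 1; do
        ./scripts/rpc.py bdev_null_create "bdev_null${i}" 64 512 --md-size 16 --dif-type 1
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" \
            --serial-number "53313233-${i}" --allow-any-host
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "bdev_null${i}"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" \
            -t tcp -a 10.0.0.2 -s 4420
    done

Teardown runs in the reverse order, as the destroy_subsystems trace earlier in the log shows: nvmf_delete_subsystem followed by bdev_null_delete for each sub.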
00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:57.594 16:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:57.594 "params": { 00:27:57.594 "name": "Nvme0", 00:27:57.594 "trtype": "tcp", 00:27:57.594 "traddr": "10.0.0.2", 00:27:57.594 "adrfam": "ipv4", 00:27:57.594 "trsvcid": "4420", 00:27:57.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:57.594 "hdgst": false, 00:27:57.594 "ddgst": false 00:27:57.594 }, 00:27:57.594 "method": "bdev_nvme_attach_controller" 00:27:57.594 },{ 00:27:57.594 "params": { 00:27:57.594 "name": "Nvme1", 00:27:57.594 "trtype": "tcp", 00:27:57.594 "traddr": "10.0.0.2", 00:27:57.594 "adrfam": "ipv4", 00:27:57.594 "trsvcid": "4420", 00:27:57.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:57.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:57.594 "hdgst": false, 00:27:57.594 "ddgst": false 00:27:57.594 }, 00:27:57.594 "method": "bdev_nvme_attach_controller" 00:27:57.594 }' 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:57.594 16:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.594 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:57.594 ... 00:27:57.594 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:57.594 ... 
00:27:57.594 fio-3.35 00:27:57.594 Starting 4 threads 00:27:57.594 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.862 00:28:02.862 filename0: (groupid=0, jobs=1): err= 0: pid=149085: Fri Jul 12 16:04:31 2024 00:28:02.862 read: IOPS=1882, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5003msec) 00:28:02.862 slat (nsec): min=5252, max=57772, avg=12305.10, stdev=5989.43 00:28:02.862 clat (usec): min=1013, max=7697, avg=4210.45, stdev=754.68 00:28:02.862 lat (usec): min=1026, max=7711, avg=4222.76, stdev=754.05 00:28:02.862 clat percentiles (usec): 00:28:02.862 | 1.00th=[ 2900], 5.00th=[ 3425], 10.00th=[ 3589], 20.00th=[ 3720], 00:28:02.862 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4080], 00:28:02.862 | 70.00th=[ 4178], 80.00th=[ 4424], 90.00th=[ 5800], 95.00th=[ 5866], 00:28:02.862 | 99.00th=[ 6128], 99.50th=[ 6390], 99.90th=[ 6980], 99.95th=[ 7308], 00:28:02.862 | 99.99th=[ 7701] 00:28:02.862 bw ( KiB/s): min=14704, max=15632, per=25.23%, avg=15067.20, stdev=279.89, samples=10 00:28:02.862 iops : min= 1838, max= 1954, avg=1883.40, stdev=34.99, samples=10 00:28:02.862 lat (msec) : 2=0.13%, 4=50.32%, 10=49.55% 00:28:02.862 cpu : usr=94.46%, sys=5.06%, ctx=21, majf=0, minf=9 00:28:02.862 IO depths : 1=0.1%, 2=2.2%, 4=69.9%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 issued rwts: total=9420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.862 filename0: (groupid=0, jobs=1): err= 0: pid=149086: Fri Jul 12 16:04:31 2024 00:28:02.862 read: IOPS=1847, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5002msec) 00:28:02.862 slat (nsec): min=6879, max=57072, avg=11965.26, stdev=5708.92 00:28:02.862 clat (usec): min=1652, max=7752, avg=4294.23, stdev=758.07 00:28:02.862 lat (usec): min=1659, max=7766, avg=4306.19, stdev=757.45 00:28:02.862 clat percentiles (usec): 00:28:02.862 | 1.00th=[ 3195], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3785], 00:28:02.862 | 30.00th=[ 3884], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4146], 00:28:02.862 | 70.00th=[ 4293], 80.00th=[ 4621], 90.00th=[ 5800], 95.00th=[ 5932], 00:28:02.862 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7439], 99.95th=[ 7504], 00:28:02.862 | 99.99th=[ 7767] 00:28:02.862 bw ( KiB/s): min=14016, max=14992, per=24.72%, avg=14759.11, stdev=317.70, samples=9 00:28:02.862 iops : min= 1752, max= 1874, avg=1844.89, stdev=39.71, samples=9 00:28:02.862 lat (msec) : 2=0.05%, 4=46.39%, 10=53.56% 00:28:02.862 cpu : usr=94.72%, sys=4.80%, ctx=7, majf=0, minf=0 00:28:02.862 IO depths : 1=0.1%, 2=1.7%, 4=70.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 issued rwts: total=9239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.862 filename1: (groupid=0, jobs=1): err= 0: pid=149087: Fri Jul 12 16:04:31 2024 00:28:02.862 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5001msec) 00:28:02.862 slat (nsec): min=6721, max=62816, avg=16444.54, stdev=8412.84 00:28:02.862 clat (usec): min=824, max=7665, avg=4269.58, stdev=752.64 00:28:02.862 lat (usec): min=874, max=7679, avg=4286.03, stdev=750.55 00:28:02.862 clat percentiles (usec): 00:28:02.862 | 1.00th=[ 3163], 5.00th=[ 3621], 
10.00th=[ 3687], 20.00th=[ 3785], 00:28:02.862 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4080], 00:28:02.862 | 70.00th=[ 4228], 80.00th=[ 4555], 90.00th=[ 5800], 95.00th=[ 5866], 00:28:02.862 | 99.00th=[ 6259], 99.50th=[ 6652], 99.90th=[ 7177], 99.95th=[ 7570], 00:28:02.862 | 99.99th=[ 7635] 00:28:02.862 bw ( KiB/s): min=14252, max=15296, per=24.80%, avg=14810.22, stdev=276.05, samples=9 00:28:02.862 iops : min= 1781, max= 1912, avg=1851.22, stdev=34.63, samples=9 00:28:02.862 lat (usec) : 1000=0.01% 00:28:02.862 lat (msec) : 2=0.11%, 4=51.46%, 10=48.42% 00:28:02.862 cpu : usr=91.92%, sys=5.80%, ctx=221, majf=0, minf=9 00:28:02.862 IO depths : 1=0.1%, 2=2.2%, 4=70.0%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 issued rwts: total=9262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.862 filename1: (groupid=0, jobs=1): err= 0: pid=149088: Fri Jul 12 16:04:31 2024 00:28:02.862 read: IOPS=1883, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5002msec) 00:28:02.862 slat (nsec): min=6485, max=57837, avg=11818.85, stdev=5483.02 00:28:02.862 clat (usec): min=1351, max=7703, avg=4210.22, stdev=747.84 00:28:02.862 lat (usec): min=1369, max=7718, avg=4222.04, stdev=747.54 00:28:02.862 clat percentiles (usec): 00:28:02.862 | 1.00th=[ 3032], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3687], 00:28:02.862 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 4015], 60.00th=[ 4113], 00:28:02.862 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 5800], 95.00th=[ 5932], 00:28:02.862 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6783], 99.95th=[ 6980], 00:28:02.862 | 99.99th=[ 7701] 00:28:02.862 bw ( KiB/s): min=14768, max=15408, per=25.23%, avg=15068.60, stdev=232.29, samples=10 00:28:02.862 iops : min= 1846, max= 1926, avg=1883.50, stdev=28.93, samples=10 00:28:02.862 lat (msec) : 2=0.05%, 4=48.00%, 10=51.95% 00:28:02.862 cpu : usr=94.68%, sys=4.86%, ctx=6, majf=0, minf=0 00:28:02.862 IO depths : 1=0.1%, 2=2.5%, 4=70.0%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:02.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:02.862 issued rwts: total=9421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:02.862 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:02.862 00:28:02.862 Run status group 0 (all jobs): 00:28:02.862 READ: bw=58.3MiB/s (61.1MB/s), 14.4MiB/s-14.7MiB/s (15.1MB/s-15.4MB/s), io=292MiB (306MB), run=5001-5003msec 00:28:02.862 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:02.862 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:02.862 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.862 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:02.862 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:02.862 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:02.862 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 00:28:02.863 real 0m24.126s 00:28:02.863 user 4m34.640s 00:28:02.863 sys 0m6.668s 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 ************************************ 00:28:02.863 END TEST fio_dif_rand_params 00:28:02.863 ************************************ 00:28:02.863 16:04:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:02.863 16:04:32 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:02.863 16:04:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:02.863 16:04:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 ************************************ 00:28:02.863 START TEST fio_dif_digest 00:28:02.863 ************************************ 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 bdev_null0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:02.863 [2024-07-12 16:04:32.254738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.863 { 00:28:02.863 "params": { 00:28:02.863 "name": "Nvme$subsystem", 00:28:02.863 "trtype": "$TEST_TRANSPORT", 00:28:02.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.863 "adrfam": "ipv4", 00:28:02.863 "trsvcid": "$NVMF_PORT", 00:28:02.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.863 "hdgst": ${hdgst:-false}, 00:28:02.863 "ddgst": ${ddgst:-false} 00:28:02.863 }, 00:28:02.863 "method": "bdev_nvme_attach_controller" 00:28:02.863 } 00:28:02.863 EOF 
00:28:02.863 )") 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.863 "params": { 00:28:02.863 "name": "Nvme0", 00:28:02.863 "trtype": "tcp", 00:28:02.863 "traddr": "10.0.0.2", 00:28:02.863 "adrfam": "ipv4", 00:28:02.863 "trsvcid": "4420", 00:28:02.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.863 "hdgst": true, 00:28:02.863 "ddgst": true 00:28:02.863 }, 00:28:02.863 "method": "bdev_nvme_attach_controller" 00:28:02.863 }' 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:02.863 16:04:32 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:02.863 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:02.863 ... 
00:28:02.863 fio-3.35 00:28:02.863 Starting 3 threads 00:28:02.863 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.116 00:28:15.116 filename0: (groupid=0, jobs=1): err= 0: pid=149956: Fri Jul 12 16:04:43 2024 00:28:15.116 read: IOPS=141, BW=17.7MiB/s (18.6MB/s)(178MiB/10045msec) 00:28:15.116 slat (nsec): min=4499, max=48180, avg=17482.21, stdev=5264.74 00:28:15.116 clat (usec): min=8510, max=60939, avg=21079.55, stdev=3747.15 00:28:15.116 lat (usec): min=8523, max=60967, avg=21097.03, stdev=3747.34 00:28:15.116 clat percentiles (usec): 00:28:15.116 | 1.00th=[ 9372], 5.00th=[10814], 10.00th=[18482], 20.00th=[20055], 00:28:15.116 | 30.00th=[20579], 40.00th=[21103], 50.00th=[21627], 60.00th=[22152], 00:28:15.116 | 70.00th=[22676], 80.00th=[23200], 90.00th=[24249], 95.00th=[25035], 00:28:15.116 | 99.00th=[26870], 99.50th=[27395], 99.90th=[47973], 99.95th=[61080], 00:28:15.116 | 99.99th=[61080] 00:28:15.116 bw ( KiB/s): min=16896, max=21248, per=30.68%, avg=18227.20, stdev=1143.66, samples=20 00:28:15.116 iops : min= 132, max= 166, avg=142.40, stdev= 8.93, samples=20 00:28:15.116 lat (msec) : 10=2.81%, 20=16.90%, 50=80.22%, 100=0.07% 00:28:15.116 cpu : usr=92.62%, sys=6.85%, ctx=48, majf=0, minf=151 00:28:15.116 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.116 issued rwts: total=1426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.116 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:15.116 filename0: (groupid=0, jobs=1): err= 0: pid=149957: Fri Jul 12 16:04:43 2024 00:28:15.116 read: IOPS=136, BW=17.0MiB/s (17.9MB/s)(171MiB/10045msec) 00:28:15.116 slat (nsec): min=5099, max=46886, avg=15296.57, stdev=4163.06 00:28:15.116 clat (usec): min=9057, max=63232, avg=21960.35, stdev=4028.20 00:28:15.116 lat (usec): min=9070, max=63245, avg=21975.65, stdev=4028.06 00:28:15.116 clat percentiles (usec): 00:28:15.116 | 1.00th=[ 9765], 5.00th=[10945], 10.00th=[18482], 20.00th=[20579], 00:28:15.116 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22676], 60.00th=[23200], 00:28:15.117 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25297], 95.00th=[26084], 00:28:15.117 | 99.00th=[27919], 99.50th=[28705], 99.90th=[55313], 99.95th=[63177], 00:28:15.117 | 99.99th=[63177] 00:28:15.117 bw ( KiB/s): min=16128, max=19968, per=29.46%, avg=17499.25, stdev=1138.76, samples=20 00:28:15.117 iops : min= 126, max= 156, avg=136.70, stdev= 8.90, samples=20 00:28:15.117 lat (msec) : 10=1.83%, 20=12.05%, 50=85.98%, 100=0.15% 00:28:15.117 cpu : usr=93.13%, sys=6.42%, ctx=26, majf=0, minf=140 00:28:15.117 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.117 issued rwts: total=1369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:15.117 filename0: (groupid=0, jobs=1): err= 0: pid=149958: Fri Jul 12 16:04:43 2024 00:28:15.117 read: IOPS=185, BW=23.2MiB/s (24.4MB/s)(233MiB/10045msec) 00:28:15.117 slat (nsec): min=4670, max=51321, avg=15329.44, stdev=5268.90 00:28:15.117 clat (usec): min=11900, max=57240, avg=16098.01, stdev=5303.83 00:28:15.117 lat (usec): min=11915, max=57253, avg=16113.34, stdev=5303.67 00:28:15.117 clat percentiles (usec): 
00:28:15.117 | 1.00th=[13042], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:28:15.117 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:28:15.117 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17433], 00:28:15.117 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56886], 99.95th=[57410], 00:28:15.117 | 99.99th=[57410] 00:28:15.117 bw ( KiB/s): min=20992, max=25600, per=40.18%, avg=23872.00, stdev=1404.32, samples=20 00:28:15.117 iops : min= 164, max= 200, avg=186.50, stdev=10.97, samples=20 00:28:15.117 lat (msec) : 20=98.13%, 50=0.16%, 100=1.71% 00:28:15.117 cpu : usr=90.80%, sys=8.60%, ctx=34, majf=0, minf=103 00:28:15.117 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:15.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:15.117 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:15.117 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:15.117 00:28:15.117 Run status group 0 (all jobs): 00:28:15.117 READ: bw=58.0MiB/s (60.8MB/s), 17.0MiB/s-23.2MiB/s (17.9MB/s-24.4MB/s), io=583MiB (611MB), run=10045-10045msec 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.117 00:28:15.117 real 0m11.178s 00:28:15.117 user 0m28.917s 00:28:15.117 sys 0m2.513s 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:15.117 16:04:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.117 ************************************ 00:28:15.117 END TEST fio_dif_digest 00:28:15.117 ************************************ 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:15.117 16:04:43 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:15.117 16:04:43 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:28:15.117 rmmod nvme_tcp 00:28:15.117 rmmod nvme_fabrics 00:28:15.117 rmmod nvme_keyring 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 143905 ']' 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 143905 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 143905 ']' 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 143905 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143905 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143905' 00:28:15.117 killing process with pid 143905 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@967 -- # kill 143905 00:28:15.117 16:04:43 nvmf_dif -- common/autotest_common.sh@972 -- # wait 143905 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:15.117 16:04:43 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:15.376 Waiting for block devices as requested 00:28:15.376 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:15.376 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:15.634 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:15.634 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:15.634 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:15.634 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:15.893 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:15.893 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:15.893 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:16.152 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:16.152 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:16.152 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:16.411 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:16.411 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:16.411 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:16.411 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:16.670 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:16.670 16:04:46 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:16.670 16:04:46 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:16.670 16:04:46 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:16.670 16:04:46 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:16.670 16:04:46 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.670 16:04:46 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:16.670 16:04:46 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.198 16:04:48 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:19.198 00:28:19.198 real 1m6.749s 00:28:19.198 user 6m26.591s 00:28:19.198 sys 0m20.256s 00:28:19.198 16:04:48 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:28:19.198 16:04:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:19.198 ************************************ 00:28:19.198 END TEST nvmf_dif 00:28:19.198 ************************************ 00:28:19.198 16:04:48 -- common/autotest_common.sh@1142 -- # return 0 00:28:19.198 16:04:48 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:19.198 16:04:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:19.198 16:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:19.198 16:04:48 -- common/autotest_common.sh@10 -- # set +x 00:28:19.198 ************************************ 00:28:19.198 START TEST nvmf_abort_qd_sizes 00:28:19.198 ************************************ 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:19.198 * Looking for test storage... 00:28:19.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.198 16:04:48 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:19.198 16:04:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.094 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:21.095 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:21.095 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:21.095 Found net devices under 0000:09:00.0: cvl_0_0 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:21.095 Found net devices under 0000:09:00.1: cvl_0_1 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
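The device-discovery loop traced above matches supported PCI IDs (the E810 0x159b functions on this rig) and then resolves each function to its kernel net interface through sysfs. A minimal standalone sketch of that lookup, assuming the same E810 device ID and the sysfs layout used by nvmf/common.sh; interface names such as cvl_0_0 come from the log and would differ elsewhere:

#!/usr/bin/env bash
# Sketch of the gather_supported_nvmf_pci_devs idea: map NVMf-capable PCI
# functions to the net devices the kernel created for them.
intel=0x8086
declare -a pci_devs net_devs

# Find Intel E810 (0x159b) functions, e.g. 0000:09:00.0 / 0000:09:00.1
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
    if [[ $vendor == "$intel" && $device == 0x159b ]]; then
        pci_devs+=("${dev##*/}")
    fi
done

# Resolve each function to its net interface via sysfs, as the test does
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done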
00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:21.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:28:21.095 00:28:21.095 --- 10.0.0.2 ping statistics --- 00:28:21.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.095 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:28:21.095 00:28:21.095 --- 10.0.0.1 ping statistics --- 00:28:21.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.095 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:21.095 16:04:50 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:22.027 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:22.027 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:22.027 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:22.027 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:22.027 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:22.027 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:22.027 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:22.027 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:22.284 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:23.216 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:23.216 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.216 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:23.216 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:23.216 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.216 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=154773 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 154773 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 154773 ']' 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:23.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.474 16:04:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.474 [2024-07-12 16:04:53.016714] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:28:23.474 [2024-07-12 16:04:53.016802] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.474 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.474 [2024-07-12 16:04:53.081957] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.475 [2024-07-12 16:04:53.191727] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.475 [2024-07-12 16:04:53.191776] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.475 [2024-07-12 16:04:53.191801] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.475 [2024-07-12 16:04:53.191812] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.475 [2024-07-12 16:04:53.191822] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.475 [2024-07-12 16:04:53.191979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.475 [2024-07-12 16:04:53.192046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.475 [2024-07-12 16:04:53.192111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.475 [2024-07-12 16:04:53.192114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:28:23.732 16:04:53 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.732 16:04:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.732 ************************************ 00:28:23.732 START TEST spdk_target_abort 00:28:23.732 ************************************ 00:28:23.732 16:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:23.732 16:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:23.732 16:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:28:23.732 16:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.732 16:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.009 spdk_targetn1 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.009 [2024-07-12 16:04:56.234544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:27.009 [2024-07-12 16:04:56.266802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.009 16:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.009 EAL: No free 2048 kB hugepages 
reported on node 1 00:28:30.289 Initializing NVMe Controllers 00:28:30.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.289 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.289 Initialization complete. Launching workers. 00:28:30.289 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10658, failed: 0 00:28:30.289 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1311, failed to submit 9347 00:28:30.289 success 728, unsuccess 583, failed 0 00:28:30.289 16:04:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:30.289 16:04:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:30.289 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.630 Initializing NVMe Controllers 00:28:33.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:33.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:33.630 Initialization complete. Launching workers. 00:28:33.630 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8731, failed: 0 00:28:33.630 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7504 00:28:33.630 success 320, unsuccess 907, failed 0 00:28:33.630 16:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:33.630 16:05:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.630 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.907 Initializing NVMe Controllers 00:28:36.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:36.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:36.907 Initialization complete. Launching workers. 
00:28:36.907 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30625, failed: 0 00:28:36.907 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2755, failed to submit 27870 00:28:36.907 success 545, unsuccess 2210, failed 0 00:28:36.907 16:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:36.907 16:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.907 16:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.907 16:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.907 16:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:36.907 16:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.907 16:05:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 154773 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 154773 ']' 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 154773 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154773 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 154773' 00:28:37.839 killing process with pid 154773 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 154773 00:28:37.839 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 154773 00:28:38.096 00:28:38.096 real 0m14.237s 00:28:38.096 user 0m53.952s 00:28:38.096 sys 0m2.614s 00:28:38.096 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:38.096 16:05:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:38.096 ************************************ 00:28:38.096 END TEST spdk_target_abort 00:28:38.096 ************************************ 00:28:38.096 16:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:38.096 16:05:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:38.096 16:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:38.096 16:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.096 16:05:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:38.096 
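The three abort runs above differ only in the -q (queue depth) argument passed to SPDK's abort example; address, NQN and I/O options are identical. A hedged sketch of the equivalent manual invocation, with the target string and flags copied from the log (the build path is this workspace's and would differ on another machine):

#!/usr/bin/env bash
# Sketch: replay the spdk_target_abort runs at the queue depths used by the test.
ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -w rw -M 50: mixed read/write at 50%, -o 4096: 4 KiB I/O size
    "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
done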
************************************ 00:28:38.096 START TEST kernel_target_abort 00:28:38.096 ************************************ 00:28:38.096 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:28:38.096 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:38.096 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:38.096 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.096 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:38.097 16:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:39.472 Waiting for block devices as requested 00:28:39.472 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:39.472 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:39.472 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:39.472 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:39.472 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:39.730 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:39.730 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:39.730 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:39.730 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:39.989 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:39.989 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:40.248 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:40.248 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:40.248 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:40.248 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:40.507 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:40.507 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:40.507 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:40.765 No valid GPT data, bailing 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:40.765 16:05:10 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:28:40.765 00:28:40.765 Discovery Log Number of Records 2, Generation counter 2 00:28:40.765 =====Discovery Log Entry 0====== 00:28:40.765 trtype: tcp 00:28:40.765 adrfam: ipv4 00:28:40.765 subtype: current discovery subsystem 00:28:40.765 treq: not specified, sq flow control disable supported 00:28:40.765 portid: 1 00:28:40.765 trsvcid: 4420 00:28:40.765 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:40.765 traddr: 10.0.0.1 00:28:40.765 eflags: none 00:28:40.765 sectype: none 00:28:40.765 =====Discovery Log Entry 1====== 00:28:40.765 trtype: tcp 00:28:40.765 adrfam: ipv4 00:28:40.765 subtype: nvme subsystem 00:28:40.765 treq: not specified, sq flow control disable supported 00:28:40.765 portid: 1 00:28:40.765 trsvcid: 4420 00:28:40.765 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:40.765 traddr: 10.0.0.1 00:28:40.765 eflags: none 00:28:40.765 sectype: none 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.765 16:05:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:40.765 16:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:40.765 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.056 Initializing NVMe Controllers 00:28:44.056 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:44.056 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:44.056 Initialization complete. Launching workers. 00:28:44.056 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38611, failed: 0 00:28:44.056 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38611, failed to submit 0 00:28:44.056 success 0, unsuccess 38611, failed 0 00:28:44.056 16:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:44.056 16:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:44.056 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.334 Initializing NVMe Controllers 00:28:47.334 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:47.334 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:47.334 Initialization complete. Launching workers. 
00:28:47.334 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 77259, failed: 0 00:28:47.334 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19458, failed to submit 57801 00:28:47.334 success 0, unsuccess 19458, failed 0 00:28:47.334 16:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:47.334 16:05:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:47.334 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.608 Initializing NVMe Controllers 00:28:50.608 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:50.608 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:50.608 Initialization complete. Launching workers. 00:28:50.608 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75318, failed: 0 00:28:50.608 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18806, failed to submit 56512 00:28:50.608 success 0, unsuccess 18806, failed 0 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:50.608 16:05:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:51.173 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:51.173 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:51.173 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:51.173 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:51.173 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:51.432 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:51.432 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:51.432 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:51.432 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:51.432 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:51.432 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:51.432 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:51.432 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:51.432 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:28:51.432 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:51.432 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:52.368 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:52.368 00:28:52.368 real 0m14.409s 00:28:52.368 user 0m5.104s 00:28:52.368 sys 0m3.429s 00:28:52.368 16:05:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:52.368 16:05:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:52.368 ************************************ 00:28:52.368 END TEST kernel_target_abort 00:28:52.368 ************************************ 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:52.625 rmmod nvme_tcp 00:28:52.625 rmmod nvme_fabrics 00:28:52.625 rmmod nvme_keyring 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:52.625 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 154773 ']' 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 154773 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 154773 ']' 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 154773 00:28:52.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (154773) - No such process 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 154773 is not found' 00:28:52.626 Process with pid 154773 is not found 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:52.626 16:05:22 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:53.595 Waiting for block devices as requested 00:28:53.595 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:53.854 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:53.854 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:53.854 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:54.112 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:54.113 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:54.113 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:54.370 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:54.370 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:28:54.370 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:54.627 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:54.627 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:54.627 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:54.627 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
00:28:54.885 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:54.885 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:54.885 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:55.146 16:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.146 16:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.146 16:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.146 16:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.146 16:05:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.146 16:05:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:55.146 16:05:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.055 16:05:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:57.055 00:28:57.055 real 0m38.337s 00:28:57.055 user 1m1.113s 00:28:57.055 sys 0m9.557s 00:28:57.055 16:05:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:57.055 16:05:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:57.055 ************************************ 00:28:57.055 END TEST nvmf_abort_qd_sizes 00:28:57.055 ************************************ 00:28:57.055 16:05:26 -- common/autotest_common.sh@1142 -- # return 0 00:28:57.055 16:05:26 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:57.055 16:05:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:57.055 16:05:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.055 16:05:26 -- common/autotest_common.sh@10 -- # set +x 00:28:57.055 ************************************ 00:28:57.055 START TEST keyring_file 00:28:57.055 ************************************ 00:28:57.055 16:05:26 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:57.314 * Looking for test storage... 
00:28:57.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:57.314 16:05:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:57.314 16:05:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:57.314 16:05:26 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:57.314 16:05:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:57.314 16:05:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:57.314 16:05:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:57.314 16:05:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.314 16:05:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.314 16:05:26 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.314 16:05:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:57.314 16:05:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fR5dHdqTxN 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:57.315 16:05:26 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fR5dHdqTxN 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fR5dHdqTxN 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.fR5dHdqTxN 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TtgirHkXGL 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:57.315 16:05:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TtgirHkXGL 00:28:57.315 16:05:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TtgirHkXGL 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TtgirHkXGL 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=160630 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:57.315 16:05:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 160630 00:28:57.315 16:05:26 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 160630 ']' 00:28:57.315 16:05:26 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.315 16:05:26 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.315 16:05:26 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.315 16:05:26 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.315 16:05:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:57.315 [2024-07-12 16:05:26.998369] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
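The body of the `python -` invoked by format_interchange_psk above is not echoed by the trace. A sketch of what the two key files end up containing, assuming the NVMe/TCP PSK interchange layout NVMeTLSkey-1:<digest>:<base64 of the raw key followed by its CRC-32>: with digest field 00 for an unhashed key (the digest argument passed above is 0):

    hexkey=00112233445566778899aabbccddeeff      # key0 material from file.sh@15 above
    keypath=$(mktemp)                            # stands in for /tmp/tmp.fR5dHdqTxN
    python3 - "$hexkey" <<'EOF' > "$keypath"
    import sys, base64, binascii
    key = binascii.unhexlify(sys.argv[1])                 # raw 16-byte PSK
    crc = binascii.crc32(key).to_bytes(4, 'little')       # CRC-32 appended before encoding
    print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
    EOF
    chmod 0600 "$keypath"                        # owner-only permissions, which the keyring enforces later in the run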
00:28:57.315 [2024-07-12 16:05:26.998454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160630 ] 00:28:57.315 EAL: No free 2048 kB hugepages reported on node 1 00:28:57.573 [2024-07-12 16:05:27.056829] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.573 [2024-07-12 16:05:27.167432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.830 16:05:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:57.830 16:05:27 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:57.830 16:05:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:57.830 16:05:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:57.831 [2024-07-12 16:05:27.425799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.831 null0 00:28:57.831 [2024-07-12 16:05:27.457840] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:57.831 [2024-07-12 16:05:27.458262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:57.831 [2024-07-12 16:05:27.465852] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:57.831 16:05:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:57.831 [2024-07-12 16:05:27.473861] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:57.831 request: 00:28:57.831 { 00:28:57.831 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:57.831 "secure_channel": false, 00:28:57.831 "listen_address": { 00:28:57.831 "trtype": "tcp", 00:28:57.831 "traddr": "127.0.0.1", 00:28:57.831 "trsvcid": "4420" 00:28:57.831 }, 00:28:57.831 "method": "nvmf_subsystem_add_listener", 00:28:57.831 "req_id": 1 00:28:57.831 } 00:28:57.831 Got JSON-RPC error response 00:28:57.831 response: 00:28:57.831 { 00:28:57.831 "code": -32602, 00:28:57.831 "message": "Invalid parameters" 00:28:57.831 } 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 
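The duplicate-listener check above is driven through the test's rpc_cmd wrapper against the spdk_tgt started a few lines earlier; a simplified equivalent with scripts/rpc.py (it omits the null bdev and the PSK-protected host entry the script also configures) might look like:

    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420
    # adding the same listener again is the negative case exercised above: the target logs
    # "Listener already exists" and the RPC fails with -32602 (Invalid parameters)
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420 \
      || echo "duplicate listener rejected, as expected"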
00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:57.831 16:05:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=160645 00:28:57.831 16:05:27 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:57.831 16:05:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 160645 /var/tmp/bperf.sock 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 160645 ']' 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:57.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:57.831 16:05:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:57.831 [2024-07-12 16:05:27.519127] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 00:28:57.831 [2024-07-12 16:05:27.519191] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160645 ] 00:28:57.831 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.089 [2024-07-12 16:05:27.575396] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.089 [2024-07-12 16:05:27.682781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.089 16:05:27 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:58.089 16:05:27 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:58.089 16:05:27 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:28:58.089 16:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:28:58.346 16:05:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TtgirHkXGL 00:28:58.346 16:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TtgirHkXGL 00:28:58.605 16:05:28 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:58.605 16:05:28 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:58.605 16:05:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.605 16:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.605 16:05:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:58.862 16:05:28 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.fR5dHdqTxN == \/\t\m\p\/\t\m\p\.\f\R\5\d\H\d\q\T\x\N ]] 00:28:58.862 16:05:28 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:28:58.862 16:05:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:58.862 16:05:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.862 16:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.862 16:05:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:59.120 16:05:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.TtgirHkXGL == \/\t\m\p\/\t\m\p\.\T\t\g\i\r\H\k\X\G\L ]] 00:28:59.120 16:05:28 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:59.120 16:05:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.120 16:05:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.120 16:05:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.120 16:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.120 16:05:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.377 16:05:29 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:59.377 16:05:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:59.377 16:05:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:59.377 16:05:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.377 16:05:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.377 16:05:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:59.377 16:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.635 16:05:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:59.635 16:05:29 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.635 16:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.895 [2024-07-12 16:05:29.475647] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:59.895 nvme0n1 00:28:59.895 16:05:29 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:59.895 16:05:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.895 16:05:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.895 16:05:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.895 16:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.895 16:05:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.153 16:05:29 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:00.153 16:05:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:00.153 16:05:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:00.153 16:05:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.153 16:05:29 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:29:00.153 16:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.153 16:05:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:00.410 16:05:30 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:00.410 16:05:30 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:00.668 Running I/O for 1 seconds... 00:29:01.601 00:29:01.601 Latency(us) 00:29:01.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.601 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:01.601 nvme0n1 : 1.02 5134.26 20.06 0.00 0.00 24657.20 4466.16 27573.67 00:29:01.601 =================================================================================================================== 00:29:01.601 Total : 5134.26 20.06 0.00 0.00 24657.20 4466.16 27573.67 00:29:01.601 0 00:29:01.601 16:05:31 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:01.601 16:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:01.858 16:05:31 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:01.858 16:05:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:01.858 16:05:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.858 16:05:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.858 16:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.858 16:05:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:02.116 16:05:31 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:02.116 16:05:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:02.116 16:05:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:02.116 16:05:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.116 16:05:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.116 16:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.116 16:05:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:02.373 16:05:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:02.374 16:05:31 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:02.374 16:05:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:02.374 16:05:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:02.374 16:05:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:02.374 16:05:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:02.374 16:05:31 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:29:02.374 16:05:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:02.374 16:05:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:02.374 16:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:02.632 [2024-07-12 16:05:32.182649] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:02.632 [2024-07-12 16:05:32.183220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255d430 (107): Transport endpoint is not connected 00:29:02.632 [2024-07-12 16:05:32.184211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x255d430 (9): Bad file descriptor 00:29:02.632 [2024-07-12 16:05:32.185211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:02.632 [2024-07-12 16:05:32.185237] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:02.632 [2024-07-12 16:05:32.185260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:02.632 request: 00:29:02.632 { 00:29:02.632 "name": "nvme0", 00:29:02.632 "trtype": "tcp", 00:29:02.632 "traddr": "127.0.0.1", 00:29:02.632 "adrfam": "ipv4", 00:29:02.632 "trsvcid": "4420", 00:29:02.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.632 "prchk_reftag": false, 00:29:02.632 "prchk_guard": false, 00:29:02.632 "hdgst": false, 00:29:02.632 "ddgst": false, 00:29:02.632 "psk": "key1", 00:29:02.632 "method": "bdev_nvme_attach_controller", 00:29:02.632 "req_id": 1 00:29:02.632 } 00:29:02.632 Got JSON-RPC error response 00:29:02.632 response: 00:29:02.632 { 00:29:02.632 "code": -5, 00:29:02.632 "message": "Input/output error" 00:29:02.632 } 00:29:02.632 16:05:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:02.632 16:05:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:02.632 16:05:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:02.632 16:05:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:02.632 16:05:32 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:02.632 16:05:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:02.632 16:05:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:02.632 16:05:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.632 16:05:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:02.632 16:05:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.889 16:05:32 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:02.889 16:05:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:02.889 16:05:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:02.889 16:05:32 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:29:02.889 16:05:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:02.889 16:05:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:02.889 16:05:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:03.146 16:05:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:03.146 16:05:32 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:03.146 16:05:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:03.404 16:05:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:03.404 16:05:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:03.661 16:05:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:03.661 16:05:33 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:03.661 16:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.918 16:05:33 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:03.918 16:05:33 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.fR5dHdqTxN 00:29:03.918 16:05:33 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:29:03.918 16:05:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:03.918 16:05:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:29:03.918 16:05:33 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:03.918 16:05:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:03.918 16:05:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:03.918 16:05:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:03.918 16:05:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:29:03.918 16:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:29:04.175 [2024-07-12 16:05:33.658839] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fR5dHdqTxN': 0100660 00:29:04.175 [2024-07-12 16:05:33.658872] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:04.175 request: 00:29:04.175 { 00:29:04.175 "name": "key0", 00:29:04.175 "path": "/tmp/tmp.fR5dHdqTxN", 00:29:04.175 "method": "keyring_file_add_key", 00:29:04.175 "req_id": 1 00:29:04.175 } 00:29:04.175 Got JSON-RPC error response 00:29:04.175 response: 00:29:04.175 { 00:29:04.175 "code": -1, 00:29:04.175 "message": "Operation not permitted" 00:29:04.175 } 00:29:04.175 16:05:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:04.175 16:05:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:04.175 16:05:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:04.175 16:05:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
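The failure above is the point of the check: keyring_file refuses key files that are group- or world-accessible. A condensed sketch of the pattern, with $key0path standing in for the mktemp path used by the script:

    chmod 0660 "$key0path"
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path" \
      && echo "unexpected success" || echo "rejected: Invalid permissions for key file (0100660)"
    chmod 0600 "$key0path"
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"   # accepted again, as the next step shows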
00:29:04.175 16:05:33 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.fR5dHdqTxN 00:29:04.175 16:05:33 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:29:04.175 16:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fR5dHdqTxN 00:29:04.432 16:05:33 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.fR5dHdqTxN 00:29:04.432 16:05:33 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:04.432 16:05:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:04.432 16:05:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:04.432 16:05:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:04.432 16:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:04.432 16:05:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:04.690 16:05:34 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:04.691 16:05:34 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.691 16:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:04.691 [2024-07-12 16:05:34.396876] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.fR5dHdqTxN': No such file or directory 00:29:04.691 [2024-07-12 16:05:34.396914] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:04.691 [2024-07-12 16:05:34.396950] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:04.691 [2024-07-12 16:05:34.396969] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:04.691 [2024-07-12 16:05:34.396981] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:04.691 request: 00:29:04.691 { 00:29:04.691 "name": "nvme0", 00:29:04.691 "trtype": "tcp", 00:29:04.691 "traddr": "127.0.0.1", 00:29:04.691 "adrfam": "ipv4", 00:29:04.691 "trsvcid": "4420", 00:29:04.691 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:29:04.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:04.691 "prchk_reftag": false, 00:29:04.691 "prchk_guard": false, 00:29:04.691 "hdgst": false, 00:29:04.691 "ddgst": false, 00:29:04.691 "psk": "key0", 00:29:04.691 "method": "bdev_nvme_attach_controller", 00:29:04.691 "req_id": 1 00:29:04.691 } 00:29:04.691 Got JSON-RPC error response 00:29:04.691 response: 00:29:04.691 { 00:29:04.691 "code": -19, 00:29:04.691 "message": "No such device" 00:29:04.691 } 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:04.691 16:05:34 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:04.691 16:05:34 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:04.691 16:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:04.949 16:05:34 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:04.949 16:05:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:04.949 16:05:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:04.949 16:05:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:04.949 16:05:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:04.949 16:05:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:05.206 16:05:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rRbo95gqnE 00:29:05.206 16:05:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:05.206 16:05:34 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:05.206 16:05:34 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:05.206 16:05:34 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:05.206 16:05:34 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:05.206 16:05:34 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:05.206 16:05:34 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:05.206 16:05:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rRbo95gqnE 00:29:05.206 16:05:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rRbo95gqnE 00:29:05.206 16:05:34 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.rRbo95gqnE 00:29:05.206 16:05:34 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rRbo95gqnE 00:29:05.206 16:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rRbo95gqnE 00:29:05.464 16:05:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.464 16:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.721 nvme0n1 00:29:05.721 16:05:35 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:29:05.721 16:05:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:05.721 16:05:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.721 16:05:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.721 16:05:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.721 16:05:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.978 16:05:35 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:05.978 16:05:35 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:05.978 16:05:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:06.236 16:05:35 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:06.236 16:05:35 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:06.236 16:05:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.236 16:05:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.236 16:05:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:06.493 16:05:36 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:06.493 16:05:36 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:06.493 16:05:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:06.493 16:05:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:06.493 16:05:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.493 16:05:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.493 16:05:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:06.751 16:05:36 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:06.751 16:05:36 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:06.751 16:05:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:07.008 16:05:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:07.008 16:05:36 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:07.008 16:05:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.265 16:05:36 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:07.265 16:05:36 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rRbo95gqnE 00:29:07.265 16:05:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rRbo95gqnE 00:29:07.554 16:05:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TtgirHkXGL 00:29:07.554 16:05:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TtgirHkXGL 00:29:07.812 16:05:37 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:07.812 16:05:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.069 nvme0n1 00:29:08.069 16:05:37 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:08.069 16:05:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:08.328 16:05:37 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:08.328 "subsystems": [ 00:29:08.328 { 00:29:08.328 "subsystem": "keyring", 00:29:08.328 "config": [ 00:29:08.328 { 00:29:08.328 "method": "keyring_file_add_key", 00:29:08.328 "params": { 00:29:08.328 "name": "key0", 00:29:08.328 "path": "/tmp/tmp.rRbo95gqnE" 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "keyring_file_add_key", 00:29:08.328 "params": { 00:29:08.328 "name": "key1", 00:29:08.328 "path": "/tmp/tmp.TtgirHkXGL" 00:29:08.328 } 00:29:08.328 } 00:29:08.328 ] 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "subsystem": "iobuf", 00:29:08.328 "config": [ 00:29:08.328 { 00:29:08.328 "method": "iobuf_set_options", 00:29:08.328 "params": { 00:29:08.328 "small_pool_count": 8192, 00:29:08.328 "large_pool_count": 1024, 00:29:08.328 "small_bufsize": 8192, 00:29:08.328 "large_bufsize": 135168 00:29:08.328 } 00:29:08.328 } 00:29:08.328 ] 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "subsystem": "sock", 00:29:08.328 "config": [ 00:29:08.328 { 00:29:08.328 "method": "sock_set_default_impl", 00:29:08.328 "params": { 00:29:08.328 "impl_name": "posix" 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "sock_impl_set_options", 00:29:08.328 "params": { 00:29:08.328 "impl_name": "ssl", 00:29:08.328 "recv_buf_size": 4096, 00:29:08.328 "send_buf_size": 4096, 00:29:08.328 "enable_recv_pipe": true, 00:29:08.328 "enable_quickack": false, 00:29:08.328 "enable_placement_id": 0, 00:29:08.328 "enable_zerocopy_send_server": true, 00:29:08.328 "enable_zerocopy_send_client": false, 00:29:08.328 "zerocopy_threshold": 0, 00:29:08.328 "tls_version": 0, 00:29:08.328 "enable_ktls": false 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "sock_impl_set_options", 00:29:08.328 "params": { 00:29:08.328 "impl_name": "posix", 00:29:08.328 "recv_buf_size": 2097152, 00:29:08.328 "send_buf_size": 2097152, 00:29:08.328 "enable_recv_pipe": true, 00:29:08.328 "enable_quickack": false, 00:29:08.328 "enable_placement_id": 0, 00:29:08.328 "enable_zerocopy_send_server": true, 00:29:08.328 "enable_zerocopy_send_client": false, 00:29:08.328 "zerocopy_threshold": 0, 00:29:08.328 "tls_version": 0, 00:29:08.328 "enable_ktls": false 00:29:08.328 } 00:29:08.328 } 00:29:08.328 ] 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "subsystem": "vmd", 00:29:08.328 "config": [] 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "subsystem": "accel", 00:29:08.328 "config": [ 00:29:08.328 { 00:29:08.328 "method": "accel_set_options", 00:29:08.328 "params": { 00:29:08.328 "small_cache_size": 128, 00:29:08.328 "large_cache_size": 16, 00:29:08.328 "task_count": 2048, 00:29:08.328 "sequence_count": 2048, 00:29:08.328 "buf_count": 2048 00:29:08.328 } 00:29:08.328 } 00:29:08.328 ] 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 
"subsystem": "bdev", 00:29:08.328 "config": [ 00:29:08.328 { 00:29:08.328 "method": "bdev_set_options", 00:29:08.328 "params": { 00:29:08.328 "bdev_io_pool_size": 65535, 00:29:08.328 "bdev_io_cache_size": 256, 00:29:08.328 "bdev_auto_examine": true, 00:29:08.328 "iobuf_small_cache_size": 128, 00:29:08.328 "iobuf_large_cache_size": 16 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "bdev_raid_set_options", 00:29:08.328 "params": { 00:29:08.328 "process_window_size_kb": 1024 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "bdev_iscsi_set_options", 00:29:08.328 "params": { 00:29:08.328 "timeout_sec": 30 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "bdev_nvme_set_options", 00:29:08.328 "params": { 00:29:08.328 "action_on_timeout": "none", 00:29:08.328 "timeout_us": 0, 00:29:08.328 "timeout_admin_us": 0, 00:29:08.328 "keep_alive_timeout_ms": 10000, 00:29:08.328 "arbitration_burst": 0, 00:29:08.328 "low_priority_weight": 0, 00:29:08.328 "medium_priority_weight": 0, 00:29:08.328 "high_priority_weight": 0, 00:29:08.328 "nvme_adminq_poll_period_us": 10000, 00:29:08.328 "nvme_ioq_poll_period_us": 0, 00:29:08.328 "io_queue_requests": 512, 00:29:08.328 "delay_cmd_submit": true, 00:29:08.328 "transport_retry_count": 4, 00:29:08.328 "bdev_retry_count": 3, 00:29:08.328 "transport_ack_timeout": 0, 00:29:08.328 "ctrlr_loss_timeout_sec": 0, 00:29:08.328 "reconnect_delay_sec": 0, 00:29:08.328 "fast_io_fail_timeout_sec": 0, 00:29:08.328 "disable_auto_failback": false, 00:29:08.328 "generate_uuids": false, 00:29:08.328 "transport_tos": 0, 00:29:08.328 "nvme_error_stat": false, 00:29:08.328 "rdma_srq_size": 0, 00:29:08.328 "io_path_stat": false, 00:29:08.328 "allow_accel_sequence": false, 00:29:08.328 "rdma_max_cq_size": 0, 00:29:08.328 "rdma_cm_event_timeout_ms": 0, 00:29:08.328 "dhchap_digests": [ 00:29:08.328 "sha256", 00:29:08.328 "sha384", 00:29:08.328 "sha512" 00:29:08.328 ], 00:29:08.328 "dhchap_dhgroups": [ 00:29:08.328 "null", 00:29:08.328 "ffdhe2048", 00:29:08.328 "ffdhe3072", 00:29:08.328 "ffdhe4096", 00:29:08.328 "ffdhe6144", 00:29:08.328 "ffdhe8192" 00:29:08.328 ] 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "bdev_nvme_attach_controller", 00:29:08.328 "params": { 00:29:08.328 "name": "nvme0", 00:29:08.328 "trtype": "TCP", 00:29:08.328 "adrfam": "IPv4", 00:29:08.328 "traddr": "127.0.0.1", 00:29:08.328 "trsvcid": "4420", 00:29:08.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.328 "prchk_reftag": false, 00:29:08.328 "prchk_guard": false, 00:29:08.328 "ctrlr_loss_timeout_sec": 0, 00:29:08.328 "reconnect_delay_sec": 0, 00:29:08.328 "fast_io_fail_timeout_sec": 0, 00:29:08.328 "psk": "key0", 00:29:08.328 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.328 "hdgst": false, 00:29:08.328 "ddgst": false 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "bdev_nvme_set_hotplug", 00:29:08.328 "params": { 00:29:08.328 "period_us": 100000, 00:29:08.328 "enable": false 00:29:08.328 } 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "method": "bdev_wait_for_examine" 00:29:08.328 } 00:29:08.328 ] 00:29:08.328 }, 00:29:08.328 { 00:29:08.328 "subsystem": "nbd", 00:29:08.328 "config": [] 00:29:08.328 } 00:29:08.328 ] 00:29:08.328 }' 00:29:08.328 16:05:37 keyring_file -- keyring/file.sh@114 -- # killprocess 160645 00:29:08.328 16:05:37 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 160645 ']' 00:29:08.328 16:05:37 keyring_file -- common/autotest_common.sh@952 -- # kill -0 160645 00:29:08.328 16:05:37 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:08.328 16:05:37 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:08.328 16:05:37 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160645 00:29:08.328 16:05:37 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:08.328 16:05:37 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:08.329 16:05:37 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160645' 00:29:08.329 killing process with pid 160645 00:29:08.329 16:05:37 keyring_file -- common/autotest_common.sh@967 -- # kill 160645 00:29:08.329 Received shutdown signal, test time was about 1.000000 seconds 00:29:08.329 00:29:08.329 Latency(us) 00:29:08.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.329 =================================================================================================================== 00:29:08.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.329 16:05:37 keyring_file -- common/autotest_common.sh@972 -- # wait 160645 00:29:08.588 16:05:38 keyring_file -- keyring/file.sh@117 -- # bperfpid=162098 00:29:08.588 16:05:38 keyring_file -- keyring/file.sh@119 -- # waitforlisten 162098 /var/tmp/bperf.sock 00:29:08.588 16:05:38 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 162098 ']' 00:29:08.588 16:05:38 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.588 16:05:38 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:08.588 16:05:38 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:08.588 16:05:38 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
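The `-c /dev/fd/63` argument above is the hallmark of bash process substitution: file.sh@112 saved the first bdevperf's configuration and file.sh@115 feeds that JSON into a fresh instance, so the keys and the attached controller must be rebuilt from config alone. A sketch of the same hand-off:

    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)          # file.sh@112 above
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c <(echo "$config")                     # <(...) is what appears as /dev/fd/63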
00:29:08.588 16:05:38 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:08.588 16:05:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:08.588 16:05:38 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:08.588 "subsystems": [ 00:29:08.588 { 00:29:08.588 "subsystem": "keyring", 00:29:08.588 "config": [ 00:29:08.588 { 00:29:08.588 "method": "keyring_file_add_key", 00:29:08.588 "params": { 00:29:08.588 "name": "key0", 00:29:08.588 "path": "/tmp/tmp.rRbo95gqnE" 00:29:08.588 } 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "method": "keyring_file_add_key", 00:29:08.588 "params": { 00:29:08.588 "name": "key1", 00:29:08.588 "path": "/tmp/tmp.TtgirHkXGL" 00:29:08.588 } 00:29:08.588 } 00:29:08.588 ] 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "subsystem": "iobuf", 00:29:08.588 "config": [ 00:29:08.588 { 00:29:08.588 "method": "iobuf_set_options", 00:29:08.588 "params": { 00:29:08.588 "small_pool_count": 8192, 00:29:08.588 "large_pool_count": 1024, 00:29:08.588 "small_bufsize": 8192, 00:29:08.588 "large_bufsize": 135168 00:29:08.588 } 00:29:08.588 } 00:29:08.588 ] 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "subsystem": "sock", 00:29:08.588 "config": [ 00:29:08.588 { 00:29:08.588 "method": "sock_set_default_impl", 00:29:08.588 "params": { 00:29:08.588 "impl_name": "posix" 00:29:08.588 } 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "method": "sock_impl_set_options", 00:29:08.588 "params": { 00:29:08.588 "impl_name": "ssl", 00:29:08.588 "recv_buf_size": 4096, 00:29:08.588 "send_buf_size": 4096, 00:29:08.588 "enable_recv_pipe": true, 00:29:08.588 "enable_quickack": false, 00:29:08.588 "enable_placement_id": 0, 00:29:08.588 "enable_zerocopy_send_server": true, 00:29:08.588 "enable_zerocopy_send_client": false, 00:29:08.588 "zerocopy_threshold": 0, 00:29:08.588 "tls_version": 0, 00:29:08.588 "enable_ktls": false 00:29:08.588 } 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "method": "sock_impl_set_options", 00:29:08.588 "params": { 00:29:08.588 "impl_name": "posix", 00:29:08.588 "recv_buf_size": 2097152, 00:29:08.588 "send_buf_size": 2097152, 00:29:08.588 "enable_recv_pipe": true, 00:29:08.588 "enable_quickack": false, 00:29:08.588 "enable_placement_id": 0, 00:29:08.588 "enable_zerocopy_send_server": true, 00:29:08.588 "enable_zerocopy_send_client": false, 00:29:08.588 "zerocopy_threshold": 0, 00:29:08.588 "tls_version": 0, 00:29:08.588 "enable_ktls": false 00:29:08.588 } 00:29:08.588 } 00:29:08.588 ] 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "subsystem": "vmd", 00:29:08.588 "config": [] 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "subsystem": "accel", 00:29:08.588 "config": [ 00:29:08.588 { 00:29:08.588 "method": "accel_set_options", 00:29:08.588 "params": { 00:29:08.588 "small_cache_size": 128, 00:29:08.588 "large_cache_size": 16, 00:29:08.588 "task_count": 2048, 00:29:08.588 "sequence_count": 2048, 00:29:08.588 "buf_count": 2048 00:29:08.588 } 00:29:08.588 } 00:29:08.588 ] 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "subsystem": "bdev", 00:29:08.588 "config": [ 00:29:08.588 { 00:29:08.588 "method": "bdev_set_options", 00:29:08.588 "params": { 00:29:08.588 "bdev_io_pool_size": 65535, 00:29:08.588 "bdev_io_cache_size": 256, 00:29:08.588 "bdev_auto_examine": true, 00:29:08.588 "iobuf_small_cache_size": 128, 00:29:08.588 "iobuf_large_cache_size": 16 00:29:08.588 } 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "method": "bdev_raid_set_options", 00:29:08.588 "params": { 00:29:08.588 "process_window_size_kb": 1024 00:29:08.588 } 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 
"method": "bdev_iscsi_set_options", 00:29:08.588 "params": { 00:29:08.588 "timeout_sec": 30 00:29:08.588 } 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "method": "bdev_nvme_set_options", 00:29:08.588 "params": { 00:29:08.588 "action_on_timeout": "none", 00:29:08.588 "timeout_us": 0, 00:29:08.588 "timeout_admin_us": 0, 00:29:08.588 "keep_alive_timeout_ms": 10000, 00:29:08.588 "arbitration_burst": 0, 00:29:08.588 "low_priority_weight": 0, 00:29:08.588 "medium_priority_weight": 0, 00:29:08.588 "high_priority_weight": 0, 00:29:08.588 "nvme_adminq_poll_period_us": 10000, 00:29:08.588 "nvme_ioq_poll_period_us": 0, 00:29:08.588 "io_queue_requests": 512, 00:29:08.588 "delay_cmd_submit": true, 00:29:08.588 "transport_retry_count": 4, 00:29:08.588 "bdev_retry_count": 3, 00:29:08.588 "transport_ack_timeout": 0, 00:29:08.588 "ctrlr_loss_timeout_sec": 0, 00:29:08.588 "reconnect_delay_sec": 0, 00:29:08.588 "fast_io_fail_timeout_sec": 0, 00:29:08.588 "disable_auto_failback": false, 00:29:08.588 "generate_uuids": false, 00:29:08.588 "transport_tos": 0, 00:29:08.588 "nvme_error_stat": false, 00:29:08.588 "rdma_srq_size": 0, 00:29:08.588 "io_path_stat": false, 00:29:08.588 "allow_accel_sequence": false, 00:29:08.588 "rdma_max_cq_size": 0, 00:29:08.588 "rdma_cm_event_timeout_ms": 0, 00:29:08.588 "dhchap_digests": [ 00:29:08.588 "sha256", 00:29:08.588 "sha384", 00:29:08.588 "sha512" 00:29:08.588 ], 00:29:08.588 "dhchap_dhgroups": [ 00:29:08.588 "null", 00:29:08.588 "ffdhe2048", 00:29:08.588 "ffdhe3072", 00:29:08.588 "ffdhe4096", 00:29:08.588 "ffdhe6144", 00:29:08.588 "ffdhe8192" 00:29:08.588 ] 00:29:08.588 } 00:29:08.588 }, 00:29:08.588 { 00:29:08.588 "method": "bdev_nvme_attach_controller", 00:29:08.588 "params": { 00:29:08.588 "name": "nvme0", 00:29:08.588 "trtype": "TCP", 00:29:08.588 "adrfam": "IPv4", 00:29:08.588 "traddr": "127.0.0.1", 00:29:08.588 "trsvcid": "4420", 00:29:08.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.588 "prchk_reftag": false, 00:29:08.588 "prchk_guard": false, 00:29:08.588 "ctrlr_loss_timeout_sec": 0, 00:29:08.588 "reconnect_delay_sec": 0, 00:29:08.588 "fast_io_fail_timeout_sec": 0, 00:29:08.588 "psk": "key0", 00:29:08.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.588 "hdgst": false, 00:29:08.589 "ddgst": false 00:29:08.589 } 00:29:08.589 }, 00:29:08.589 { 00:29:08.589 "method": "bdev_nvme_set_hotplug", 00:29:08.589 "params": { 00:29:08.589 "period_us": 100000, 00:29:08.589 "enable": false 00:29:08.589 } 00:29:08.589 }, 00:29:08.589 { 00:29:08.589 "method": "bdev_wait_for_examine" 00:29:08.589 } 00:29:08.589 ] 00:29:08.589 }, 00:29:08.589 { 00:29:08.589 "subsystem": "nbd", 00:29:08.589 "config": [] 00:29:08.589 } 00:29:08.589 ] 00:29:08.589 }' 00:29:08.589 [2024-07-12 16:05:38.191546] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
00:29:08.589 [2024-07-12 16:05:38.191650] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162098 ] 00:29:08.589 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.589 [2024-07-12 16:05:38.250403] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.847 [2024-07-12 16:05:38.359706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.847 [2024-07-12 16:05:38.535915] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:09.779 16:05:39 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.779 16:05:39 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:09.779 16:05:39 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:09.779 16:05:39 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:09.780 16:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.780 16:05:39 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:09.780 16:05:39 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:09.780 16:05:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:09.780 16:05:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.780 16:05:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.780 16:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.780 16:05:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.037 16:05:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:10.037 16:05:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:10.037 16:05:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:10.037 16:05:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.037 16:05:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.037 16:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.037 16:05:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:10.296 16:05:39 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:10.296 16:05:39 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:10.296 16:05:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:10.296 16:05:39 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:10.553 16:05:40 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:10.553 16:05:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:10.553 16:05:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.rRbo95gqnE /tmp/tmp.TtgirHkXGL 00:29:10.553 16:05:40 keyring_file -- keyring/file.sh@20 -- # killprocess 162098 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 162098 ']' 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 162098 00:29:10.553 16:05:40 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162098 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162098' 00:29:10.553 killing process with pid 162098 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@967 -- # kill 162098 00:29:10.553 Received shutdown signal, test time was about 1.000000 seconds 00:29:10.553 00:29:10.553 Latency(us) 00:29:10.553 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.553 =================================================================================================================== 00:29:10.553 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:10.553 16:05:40 keyring_file -- common/autotest_common.sh@972 -- # wait 162098 00:29:10.811 16:05:40 keyring_file -- keyring/file.sh@21 -- # killprocess 160630 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 160630 ']' 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@952 -- # kill -0 160630 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160630 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160630' 00:29:10.811 killing process with pid 160630 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@967 -- # kill 160630 00:29:10.811 [2024-07-12 16:05:40.468739] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:10.811 16:05:40 keyring_file -- common/autotest_common.sh@972 -- # wait 160630 00:29:11.377 00:29:11.377 real 0m14.117s 00:29:11.377 user 0m34.753s 00:29:11.377 sys 0m3.266s 00:29:11.377 16:05:40 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:11.377 16:05:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:11.377 ************************************ 00:29:11.377 END TEST keyring_file 00:29:11.377 ************************************ 00:29:11.377 16:05:40 -- common/autotest_common.sh@1142 -- # return 0 00:29:11.377 16:05:40 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:11.377 16:05:40 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:11.377 16:05:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:11.377 16:05:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.377 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:29:11.377 ************************************ 00:29:11.377 START TEST keyring_linux 00:29:11.377 ************************************ 00:29:11.377 16:05:40 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:11.377 * Looking for test storage... 00:29:11.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:11.377 16:05:40 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:11.377 16:05:40 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.377 16:05:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:11.377 16:05:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.377 16:05:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.377 16:05:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.378 16:05:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.378 16:05:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.378 16:05:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.378 16:05:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.378 16:05:40 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.378 16:05:40 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.378 16:05:41 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.378 16:05:41 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.378 16:05:41 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.378 16:05:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.378 16:05:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.378 16:05:41 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.378 16:05:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:11.378 16:05:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:11.378 16:05:41 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:11.378 /tmp/:spdk-test:key0 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:11.378 16:05:41 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:11.378 16:05:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:11.378 /tmp/:spdk-test:key1 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=162462 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:11.378 16:05:41 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 162462 00:29:11.378 16:05:41 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 162462 ']' 00:29:11.378 16:05:41 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.378 16:05:41 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:11.378 16:05:41 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.378 16:05:41 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:11.378 16:05:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:11.637 [2024-07-12 16:05:41.150373] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
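Note: the prep_key/format_interchange_psk steps traced above can be approximated with the sketch below. The configured hex string is treated as raw ASCII bytes (as the base64 payload in this run shows), a CRC-32 of it is appended, and the result is wrapped into the NVMeTLSkey-1:00:...: interchange form before being staged with mode 0600. The little-endian byte order of the CRC suffix is an assumption based on the TP8006 interchange format, not something visible in this trace.

    key=00112233445566778899aabbccddeeff
    psk=$(python3 -c 'import sys, base64, struct, zlib
    k = sys.argv[1].encode()                                  # key used as ASCII bytes, not hex-decoded
    crc = struct.pack("<I", zlib.crc32(k) & 0xffffffff)       # assumed little-endian CRC-32 suffix
    print("NVMeTLSkey-1:00:" + base64.b64encode(k + crc).decode() + ":")' "$key")
    echo "$psk" > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0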
00:29:11.637 [2024-07-12 16:05:41.150450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162462 ] 00:29:11.637 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.637 [2024-07-12 16:05:41.206387] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.637 [2024-07-12 16:05:41.316993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.895 16:05:41 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:11.895 16:05:41 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:11.895 16:05:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:11.895 16:05:41 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.895 16:05:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:11.895 [2024-07-12 16:05:41.572052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.895 null0 00:29:11.895 [2024-07-12 16:05:41.604107] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:11.895 [2024-07-12 16:05:41.604560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:12.153 16:05:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:12.153 1043318172 00:29:12.153 16:05:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:12.153 580975398 00:29:12.153 16:05:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=162592 00:29:12.153 16:05:41 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:12.153 16:05:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 162592 /var/tmp/bperf.sock 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 162592 ']' 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:12.153 [2024-07-12 16:05:41.667930] Starting SPDK v24.09-pre git sha1 26acb15a6 / DPDK 24.03.0 initialization... 
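Note: the kernel-keyring half of the setup above can be replayed by hand. This is a condensed sketch of what keyring/linux.sh does with keyctl; serial numbers such as 1043318172 are per-run values, and the key payload is the staged /tmp/:spdk-test:key0 file from the previous step.

    # Load the interchange-format PSK into the session keyring, then resolve it by name.
    keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s   # prints the new key serial
    sn=$(keyctl search @s user :spdk-test:key0)                        # look the key back up by name
    keyctl print "$sn"                                                 # must echo the NVMeTLSkey-1:00:...: string
    keyctl unlink "$sn"                                                # cleanup step used at the end of the test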
00:29:12.153 [2024-07-12 16:05:41.667994] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162592 ] 00:29:12.153 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.153 [2024-07-12 16:05:41.724841] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.153 [2024-07-12 16:05:41.838759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:12.153 16:05:41 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:12.153 16:05:41 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:12.153 16:05:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:12.411 16:05:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:12.411 16:05:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:12.977 16:05:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:12.977 16:05:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:12.977 [2024-07-12 16:05:42.684523] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:13.235 nvme0n1 00:29:13.235 16:05:42 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:13.235 16:05:42 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:13.235 16:05:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:13.235 16:05:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:13.235 16:05:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:13.235 16:05:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.493 16:05:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:13.493 16:05:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:13.493 16:05:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:13.493 16:05:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:13.493 16:05:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:13.493 16:05:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.493 16:05:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:13.750 16:05:43 keyring_linux -- keyring/linux.sh@25 -- # sn=1043318172 00:29:13.750 16:05:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:13.750 16:05:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:29:13.750 16:05:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 1043318172 == \1\0\4\3\3\1\8\1\7\2 ]] 00:29:13.750 16:05:43 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1043318172 00:29:13.751 16:05:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:13.751 16:05:43 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:13.751 Running I/O for 1 seconds... 00:29:14.683 00:29:14.684 Latency(us) 00:29:14.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.684 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:14.684 nvme0n1 : 1.02 5017.42 19.60 0.00 0.00 25287.21 11165.39 37476.88 00:29:14.684 =================================================================================================================== 00:29:14.684 Total : 5017.42 19.60 0.00 0.00 25287.21 11165.39 37476.88 00:29:14.684 0 00:29:14.684 16:05:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:14.684 16:05:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:14.941 16:05:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:14.941 16:05:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:14.941 16:05:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:14.941 16:05:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:14.941 16:05:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.941 16:05:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:15.199 16:05:44 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:15.199 16:05:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:15.199 16:05:44 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:15.199 16:05:44 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:15.199 16:05:44 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:15.199 16:05:44 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:15.199 16:05:44 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:15.199 16:05:44 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.199 16:05:44 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:15.199 16:05:44 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:15.199 16:05:44 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:15.199 16:05:44 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:15.456 [2024-07-12 16:05:45.127245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:15.456 [2024-07-12 16:05:45.128169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10182b0 (107): Transport endpoint is not connected 00:29:15.456 [2024-07-12 16:05:45.129172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10182b0 (9): Bad file descriptor 00:29:15.457 [2024-07-12 16:05:45.130162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:15.457 [2024-07-12 16:05:45.130183] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:15.457 [2024-07-12 16:05:45.130195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:15.457 request: 00:29:15.457 { 00:29:15.457 "name": "nvme0", 00:29:15.457 "trtype": "tcp", 00:29:15.457 "traddr": "127.0.0.1", 00:29:15.457 "adrfam": "ipv4", 00:29:15.457 "trsvcid": "4420", 00:29:15.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.457 "prchk_reftag": false, 00:29:15.457 "prchk_guard": false, 00:29:15.457 "hdgst": false, 00:29:15.457 "ddgst": false, 00:29:15.457 "psk": ":spdk-test:key1", 00:29:15.457 "method": "bdev_nvme_attach_controller", 00:29:15.457 "req_id": 1 00:29:15.457 } 00:29:15.457 Got JSON-RPC error response 00:29:15.457 response: 00:29:15.457 { 00:29:15.457 "code": -5, 00:29:15.457 "message": "Input/output error" 00:29:15.457 } 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@33 -- # sn=1043318172 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1043318172 00:29:15.457 1 links removed 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@33 -- # sn=580975398 00:29:15.457 
16:05:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 580975398 00:29:15.457 1 links removed 00:29:15.457 16:05:45 keyring_linux -- keyring/linux.sh@41 -- # killprocess 162592 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 162592 ']' 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 162592 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162592 00:29:15.457 16:05:45 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:15.715 16:05:45 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:15.715 16:05:45 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162592' 00:29:15.715 killing process with pid 162592 00:29:15.715 16:05:45 keyring_linux -- common/autotest_common.sh@967 -- # kill 162592 00:29:15.715 Received shutdown signal, test time was about 1.000000 seconds 00:29:15.715 00:29:15.715 Latency(us) 00:29:15.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.715 =================================================================================================================== 00:29:15.715 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.715 16:05:45 keyring_linux -- common/autotest_common.sh@972 -- # wait 162592 00:29:15.715 16:05:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 162462 00:29:15.715 16:05:45 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 162462 ']' 00:29:15.715 16:05:45 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 162462 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162462 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162462' 00:29:15.973 killing process with pid 162462 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@967 -- # kill 162462 00:29:15.973 16:05:45 keyring_linux -- common/autotest_common.sh@972 -- # wait 162462 00:29:16.231 00:29:16.231 real 0m4.982s 00:29:16.231 user 0m9.246s 00:29:16.231 sys 0m1.504s 00:29:16.231 16:05:45 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:16.231 16:05:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:16.231 ************************************ 00:29:16.231 END TEST keyring_linux 00:29:16.231 ************************************ 00:29:16.231 16:05:45 -- common/autotest_common.sh@1142 -- # return 0 00:29:16.231 16:05:45 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:16.231 16:05:45 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:16.231 16:05:45 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:16.231 16:05:45 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:16.231 16:05:45 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:16.231 16:05:45 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:16.232 16:05:45 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:16.232 16:05:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:16.232 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:29:16.232 16:05:45 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:16.232 16:05:45 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:16.232 16:05:45 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:16.232 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:29:18.136 INFO: APP EXITING 00:29:18.136 INFO: killing all VMs 00:29:18.136 INFO: killing vhost app 00:29:18.136 INFO: EXIT DONE 00:29:19.069 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:19.069 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:19.329 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:19.329 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:19.329 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:19.329 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:19.329 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:19.329 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:19.329 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:29:19.329 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:19.329 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:19.329 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:19.329 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:19.329 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:19.329 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:19.329 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:19.329 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:20.699 Cleaning 00:29:20.699 Removing: /var/run/dpdk/spdk0/config 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:20.699 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:20.699 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:20.699 Removing: /var/run/dpdk/spdk1/config 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:20.699 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:20.699 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:20.699 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:20.699 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:20.700 Removing: /var/run/dpdk/spdk2/config 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:20.700 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:20.700 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:20.700 Removing: /var/run/dpdk/spdk3/config 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:20.700 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:20.700 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:20.700 Removing: /var/run/dpdk/spdk4/config 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:20.700 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:20.700 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:20.700 Removing: /dev/shm/bdev_svc_trace.1 00:29:20.700 Removing: /dev/shm/nvmf_trace.0 00:29:20.700 Removing: /dev/shm/spdk_tgt_trace.pid4098136 00:29:20.700 Removing: /var/run/dpdk/spdk0 00:29:20.700 Removing: /var/run/dpdk/spdk1 00:29:20.700 Removing: /var/run/dpdk/spdk2 00:29:20.700 Removing: /var/run/dpdk/spdk3 00:29:20.700 Removing: /var/run/dpdk/spdk4 00:29:20.700 Removing: /var/run/dpdk/spdk_pid100663 00:29:20.700 Removing: /var/run/dpdk/spdk_pid107596 00:29:20.700 Removing: /var/run/dpdk/spdk_pid111953 00:29:20.700 Removing: /var/run/dpdk/spdk_pid111955 00:29:20.700 Removing: /var/run/dpdk/spdk_pid123901 00:29:20.700 Removing: /var/run/dpdk/spdk_pid124389 00:29:20.700 Removing: /var/run/dpdk/spdk_pid124835 00:29:20.700 Removing: /var/run/dpdk/spdk_pid125245 00:29:20.700 Removing: /var/run/dpdk/spdk_pid125823 00:29:20.700 Removing: /var/run/dpdk/spdk_pid126234 00:29:20.700 Removing: /var/run/dpdk/spdk_pid126638 00:29:20.700 Removing: /var/run/dpdk/spdk_pid127165 
00:29:20.700 Removing: /var/run/dpdk/spdk_pid129552 00:29:20.700 Removing: /var/run/dpdk/spdk_pid129814 00:29:20.700 Removing: /var/run/dpdk/spdk_pid133606 00:29:20.960 Removing: /var/run/dpdk/spdk_pid133776 00:29:20.960 Removing: /var/run/dpdk/spdk_pid135497 00:29:20.960 Removing: /var/run/dpdk/spdk_pid141052 00:29:20.960 Removing: /var/run/dpdk/spdk_pid141058 00:29:20.960 Removing: /var/run/dpdk/spdk_pid144027 00:29:20.960 Removing: /var/run/dpdk/spdk_pid145364 00:29:20.960 Removing: /var/run/dpdk/spdk_pid146870 00:29:20.960 Removing: /var/run/dpdk/spdk_pid147614 00:29:20.960 Removing: /var/run/dpdk/spdk_pid149018 00:29:20.960 Removing: /var/run/dpdk/spdk_pid149776 00:29:20.960 Removing: /var/run/dpdk/spdk_pid155171 00:29:20.961 Removing: /var/run/dpdk/spdk_pid155560 00:29:20.961 Removing: /var/run/dpdk/spdk_pid155950 00:29:20.961 Removing: /var/run/dpdk/spdk_pid157513 00:29:20.961 Removing: /var/run/dpdk/spdk_pid157810 00:29:20.961 Removing: /var/run/dpdk/spdk_pid158187 00:29:20.961 Removing: /var/run/dpdk/spdk_pid160630 00:29:20.961 Removing: /var/run/dpdk/spdk_pid160645 00:29:20.961 Removing: /var/run/dpdk/spdk_pid162098 00:29:20.961 Removing: /var/run/dpdk/spdk_pid162462 00:29:20.961 Removing: /var/run/dpdk/spdk_pid162592 00:29:20.961 Removing: /var/run/dpdk/spdk_pid18616 00:29:20.961 Removing: /var/run/dpdk/spdk_pid20835 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4096583 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4097322 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4098136 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4098568 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4099368 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4099510 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4100736 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4100749 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4100996 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4102298 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4103228 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4103530 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4103718 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4103927 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4104233 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4104396 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4104558 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4104743 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4105065 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4107414 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4107702 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4107864 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4107872 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4108298 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4108306 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4108659 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4108749 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4108911 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4109044 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4109210 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4109216 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4109702 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4109863 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4110061 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4110229 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4110252 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4110442 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4110593 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4110870 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4111030 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4111190 00:29:20.961 Removing: 
/var/run/dpdk/spdk_pid4111361 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4111624 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4111776 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4111935 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4112201 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4112370 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4112521 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4112795 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4112958 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4113118 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4113357 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4113541 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4113707 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4113909 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4114135 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4114298 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4114481 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4114685 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4116756 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4143406 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4146012 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4153012 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4156197 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4158639 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4159065 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4163039 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4166765 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4166768 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4167499 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4168195 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4169245 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4169641 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4169764 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4169902 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4170037 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4170039 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4170698 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4171241 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4171898 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4172297 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4172306 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4172562 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4173450 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4174171 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4179405 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4179686 00:29:20.961 Removing: /var/run/dpdk/spdk_pid4182318 00:29:21.220 Removing: /var/run/dpdk/spdk_pid4186020 00:29:21.220 Removing: /var/run/dpdk/spdk_pid4188189 00:29:21.220 Removing: /var/run/dpdk/spdk_pid45491 00:29:21.220 Removing: /var/run/dpdk/spdk_pid48273 00:29:21.220 Removing: /var/run/dpdk/spdk_pid49452 00:29:21.220 Removing: /var/run/dpdk/spdk_pid503 00:29:21.220 Removing: /var/run/dpdk/spdk_pid50664 00:29:21.220 Removing: /var/run/dpdk/spdk_pid50791 00:29:21.220 Removing: /var/run/dpdk/spdk_pid50930 00:29:21.220 Removing: /var/run/dpdk/spdk_pid51061 00:29:21.220 Removing: /var/run/dpdk/spdk_pid51495 00:29:21.220 Removing: /var/run/dpdk/spdk_pid52700 00:29:21.220 Removing: /var/run/dpdk/spdk_pid53432 00:29:21.220 Removing: /var/run/dpdk/spdk_pid53859 00:29:21.220 Removing: /var/run/dpdk/spdk_pid55468 00:29:21.220 Removing: /var/run/dpdk/spdk_pid55896 00:29:21.220 Removing: /var/run/dpdk/spdk_pid56371 00:29:21.220 Removing: /var/run/dpdk/spdk_pid58891 00:29:21.220 Removing: /var/run/dpdk/spdk_pid6028 00:29:21.220 Removing: /var/run/dpdk/spdk_pid65554 00:29:21.220 Removing: /var/run/dpdk/spdk_pid68210 00:29:21.220 
Removing: /var/run/dpdk/spdk_pid71881 00:29:21.220 Removing: /var/run/dpdk/spdk_pid72905 00:29:21.220 Removing: /var/run/dpdk/spdk_pid73879 00:29:21.220 Removing: /var/run/dpdk/spdk_pid76416 00:29:21.220 Removing: /var/run/dpdk/spdk_pid7741 00:29:21.220 Removing: /var/run/dpdk/spdk_pid78764 00:29:21.220 Removing: /var/run/dpdk/spdk_pid82971 00:29:21.220 Removing: /var/run/dpdk/spdk_pid82973 00:29:21.220 Removing: /var/run/dpdk/spdk_pid8403 00:29:21.220 Removing: /var/run/dpdk/spdk_pid85749 00:29:21.220 Removing: /var/run/dpdk/spdk_pid85889 00:29:21.220 Removing: /var/run/dpdk/spdk_pid86020 00:29:21.220 Removing: /var/run/dpdk/spdk_pid86407 00:29:21.220 Removing: /var/run/dpdk/spdk_pid86412 00:29:21.220 Removing: /var/run/dpdk/spdk_pid89049 00:29:21.220 Removing: /var/run/dpdk/spdk_pid89476 00:29:21.220 Removing: /var/run/dpdk/spdk_pid92040 00:29:21.220 Removing: /var/run/dpdk/spdk_pid94012 00:29:21.220 Removing: /var/run/dpdk/spdk_pid97371 00:29:21.220 Clean 00:29:21.220 16:05:50 -- common/autotest_common.sh@1451 -- # return 0 00:29:21.220 16:05:50 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:21.220 16:05:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:21.220 16:05:50 -- common/autotest_common.sh@10 -- # set +x 00:29:21.220 16:05:50 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:21.220 16:05:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:21.220 16:05:50 -- common/autotest_common.sh@10 -- # set +x 00:29:21.220 16:05:50 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:21.220 16:05:50 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:21.220 16:05:50 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:21.220 16:05:50 -- spdk/autotest.sh@391 -- # hash lcov 00:29:21.220 16:05:50 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:21.220 16:05:50 -- spdk/autotest.sh@393 -- # hostname 00:29:21.220 16:05:50 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:21.479 geninfo: WARNING: invalid characters removed from testname! 
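Note: the coverage post-processing that follows amounts to a merge-then-filter pass. The sketch below condenses the lcov invocations traced next (the --rc branch/function-coverage switches are dropped here for brevity) and assumes it is run from the spdk checkout so that ../output resolves to the artifacts directory.

    # Combine the baseline and per-test captures, then strip external and helper-tool paths.
    lcov -q -a ../output/cov_base.info -a ../output/cov_test.info -o ../output/cov_total.info
    for skip in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r ../output/cov_total.info "$skip" -o ../output/cov_total.info
    done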
00:29:53.615 16:06:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:53.615 16:06:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:56.896 16:06:25 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:59.421 16:06:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:02.698 16:06:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:05.222 16:06:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:08.501 16:06:37 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:08.501 16:06:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.501 16:06:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:08.501 16:06:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.501 16:06:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.501 16:06:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.501 16:06:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.501 16:06:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.501 16:06:37 -- paths/export.sh@5 -- $ export PATH 00:30:08.501 16:06:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.501 16:06:37 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:08.501 16:06:37 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:08.501 16:06:37 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720793197.XXXXXX 00:30:08.501 16:06:37 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720793197.dmFeJj 00:30:08.501 16:06:37 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:08.501 16:06:37 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:08.501 16:06:37 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:08.501 16:06:37 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:08.501 16:06:37 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:08.501 16:06:37 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:08.501 16:06:37 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:08.501 16:06:37 -- common/autotest_common.sh@10 -- $ set +x 00:30:08.501 16:06:37 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:08.501 16:06:37 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:08.501 16:06:37 -- pm/common@17 -- $ local monitor 00:30:08.501 16:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.501 16:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.501 16:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.501 16:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:08.501 16:06:37 -- pm/common@21 -- $ date +%s 00:30:08.501 16:06:37 -- pm/common@25 -- $ sleep 1 00:30:08.501 
16:06:37 -- pm/common@21 -- $ date +%s 00:30:08.501 16:06:37 -- pm/common@21 -- $ date +%s 00:30:08.501 16:06:37 -- pm/common@21 -- $ date +%s 00:30:08.501 16:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793197 00:30:08.501 16:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793197 00:30:08.501 16:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793197 00:30:08.501 16:06:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793197 00:30:08.501 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793197_collect-vmstat.pm.log 00:30:08.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793197_collect-cpu-temp.pm.log 00:30:08.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793197_collect-cpu-load.pm.log 00:30:08.502 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793197_collect-bmc-pm.bmc.pm.log 00:30:09.071 16:06:38 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:09.071 16:06:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:30:09.071 16:06:38 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.071 16:06:38 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:09.071 16:06:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:09.071 16:06:38 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:09.071 16:06:38 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:09.071 16:06:38 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:09.071 16:06:38 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:09.330 16:06:38 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:09.330 16:06:38 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:09.330 16:06:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:09.330 16:06:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:09.330 16:06:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.330 16:06:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:09.330 16:06:38 -- pm/common@44 -- $ pid=172844 00:30:09.330 16:06:38 -- pm/common@50 -- $ kill -TERM 172844 00:30:09.330 16:06:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.330 16:06:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:09.330 16:06:38 -- pm/common@44 -- $ pid=172846 00:30:09.330 16:06:38 -- pm/common@50 -- $ kill 
-TERM 172846 00:30:09.330 16:06:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.330 16:06:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:09.330 16:06:38 -- pm/common@44 -- $ pid=172848 00:30:09.330 16:06:38 -- pm/common@50 -- $ kill -TERM 172848 00:30:09.330 16:06:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:09.330 16:06:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:09.330 16:06:38 -- pm/common@44 -- $ pid=172875 00:30:09.330 16:06:38 -- pm/common@50 -- $ sudo -E kill -TERM 172875 00:30:09.330 + [[ -n 4012683 ]] 00:30:09.330 + sudo kill 4012683 00:30:09.339 [Pipeline] } 00:30:09.358 [Pipeline] // stage 00:30:09.364 [Pipeline] } 00:30:09.383 [Pipeline] // timeout 00:30:09.389 [Pipeline] } 00:30:09.407 [Pipeline] // catchError 00:30:09.413 [Pipeline] } 00:30:09.432 [Pipeline] // wrap 00:30:09.438 [Pipeline] } 00:30:09.455 [Pipeline] // catchError 00:30:09.465 [Pipeline] stage 00:30:09.468 [Pipeline] { (Epilogue) 00:30:09.483 [Pipeline] catchError 00:30:09.485 [Pipeline] { 00:30:09.502 [Pipeline] echo 00:30:09.505 Cleanup processes 00:30:09.514 [Pipeline] sh 00:30:09.806 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.806 172979 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:09.806 173109 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:09.852 [Pipeline] sh 00:30:10.148 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:10.148 ++ grep -v 'sudo pgrep' 00:30:10.148 ++ awk '{print $1}' 00:30:10.148 + sudo kill -9 172979 00:30:10.161 [Pipeline] sh 00:30:10.447 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:18.570 [Pipeline] sh 00:30:18.856 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:18.856 Artifacts sizes are good 00:30:18.873 [Pipeline] archiveArtifacts 00:30:18.881 Archiving artifacts 00:30:19.095 [Pipeline] sh 00:30:19.378 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:19.393 [Pipeline] cleanWs 00:30:19.404 [WS-CLEANUP] Deleting project workspace... 00:30:19.404 [WS-CLEANUP] Deferred wipeout is used... 00:30:19.412 [WS-CLEANUP] done 00:30:19.414 [Pipeline] } 00:30:19.437 [Pipeline] // catchError 00:30:19.451 [Pipeline] sh 00:30:19.731 + logger -p user.info -t JENKINS-CI 00:30:19.740 [Pipeline] } 00:30:19.760 [Pipeline] // stage 00:30:19.766 [Pipeline] } 00:30:19.785 [Pipeline] // node 00:30:19.791 [Pipeline] End of Pipeline 00:30:19.828 Finished: SUCCESS